00:00:00.001 Started by upstream project "autotest-per-patch" build number 126161
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 23865
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.104 The recommended git tool is: git
00:00:00.104 using credential 00000000-0000-0000-0000-000000000002
00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.144 Fetching changes from the remote Git repository
00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.185 Using shallow fetch with depth 1
00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.185 > git --version # timeout=10
00:00:00.217 > git --version # 'git version 2.39.2'
00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/22 # timeout=5
00:00:04.321 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.332 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.343 Checking out Revision 055051402f6bd793109ccc450ac2f885bb0fdaeb (FETCH_HEAD)
00:00:04.343 > git config core.sparsecheckout # timeout=10
00:00:04.357 > git read-tree -mu HEAD # timeout=10
00:00:04.377 > git checkout -f 055051402f6bd793109ccc450ac2f885bb0fdaeb # timeout=5
00:00:04.407 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch"
00:00:04.407 > git rev-list --no-walk 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 # timeout=10
00:00:04.505 [Pipeline] Start of Pipeline
00:00:04.520 [Pipeline] library
00:00:04.521 Loading library shm_lib@master
00:00:04.521 Library shm_lib@master is cached. Copying from home.
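For reference, the jbp checkout above can be replayed outside Jenkins with plain git. This is a minimal sketch, assuming you have credentials for the authenticated /a/ Gerrit endpoint and direct network access (the CI job injects credentials via GIT_ASKPASS and goes through the proxy noted above):

    # Shallow-fetch the same Gerrit ref and check out the revision this run used
    git init jbp && cd jbp
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/22
    git checkout -f 055051402f6bd793109ccc450ac2f885bb0fdaeb   # the FETCH_HEAD commit reported above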
00:00:04.533 [Pipeline] node 00:00:19.588 Still waiting to schedule task 00:00:19.589 ‘CYP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘CYP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP14’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP16’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP18’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP19’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP20’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP21’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP4’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP6’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘GP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘ME1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘ME2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘ME3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘PE5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM28’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM29’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM30’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM31’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM32’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM33’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM34’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM35’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM6’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM7’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘SM8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘VM-host-WFP1’ is offline 00:00:19.589 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WCP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WCP4’ doesn’t have label 
‘vagrant-vm-host’ 00:00:19.589 ‘WFP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WFP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WFP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WFP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WFP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.589 ‘WFP23’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP27’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP31’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP33’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP42’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP43’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP46’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP49’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP51’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP53’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP67’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP6’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.590 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’ 00:00:42.154 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:42.157 [Pipeline] { 00:00:42.171 [Pipeline] catchError 00:00:42.172 [Pipeline] { 00:00:42.189 [Pipeline] wrap 00:00:42.203 [Pipeline] { 00:00:42.214 [Pipeline] stage 00:00:42.217 [Pipeline] { (Prologue) 00:00:42.243 [Pipeline] echo 00:00:42.245 Node: VM-host-SM17 00:00:42.256 [Pipeline] cleanWs 00:00:42.267 [WS-CLEANUP] Deleting project workspace... 00:00:42.267 [WS-CLEANUP] Deferred wipeout is used... 
00:00:42.273 [WS-CLEANUP] done 00:00:42.487 [Pipeline] setCustomBuildProperty 00:00:42.565 [Pipeline] httpRequest 00:00:42.588 [Pipeline] echo 00:00:42.589 Sorcerer 10.211.164.101 is alive 00:00:42.597 [Pipeline] httpRequest 00:00:42.601 HttpMethod: GET 00:00:42.601 URL: http://10.211.164.101/packages/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:42.602 Sending request to url: http://10.211.164.101/packages/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:42.610 Response Code: HTTP/1.1 200 OK 00:00:42.611 Success: Status code 200 is in the accepted range: 200,404 00:00:42.611 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:44.634 [Pipeline] sh 00:00:44.909 + tar --no-same-owner -xf jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:44.925 [Pipeline] httpRequest 00:00:44.944 [Pipeline] echo 00:00:44.946 Sorcerer 10.211.164.101 is alive 00:00:44.954 [Pipeline] httpRequest 00:00:44.959 HttpMethod: GET 00:00:44.959 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:44.960 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:44.968 Response Code: HTTP/1.1 200 OK 00:00:44.969 Success: Status code 200 is in the accepted range: 200,404 00:00:44.969 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:55.260 [Pipeline] sh 00:00:55.535 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:58.826 [Pipeline] sh 00:00:59.103 + git -C spdk log --oneline -n5 00:00:59.103 719d03c6a sock/uring: only register net impl if supported 00:00:59.103 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:59.103 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:59.103 6c7c1f57e accel: add sequence outstanding stat 00:00:59.103 3bc8e6a26 accel: add utility to put task 00:00:59.122 [Pipeline] writeFile 00:00:59.139 [Pipeline] sh 00:00:59.416 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:59.428 [Pipeline] sh 00:00:59.705 + cat autorun-spdk.conf 00:00:59.706 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.706 SPDK_TEST_NVMF=1 00:00:59.706 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.706 SPDK_TEST_URING=1 00:00:59.706 SPDK_TEST_USDT=1 00:00:59.706 SPDK_RUN_UBSAN=1 00:00:59.706 NET_TYPE=virt 00:00:59.706 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:59.712 RUN_NIGHTLY=0 00:00:59.715 [Pipeline] } 00:00:59.731 [Pipeline] // stage 00:00:59.747 [Pipeline] stage 00:00:59.749 [Pipeline] { (Run VM) 00:00:59.763 [Pipeline] sh 00:01:00.041 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:00.041 + echo 'Start stage prepare_nvme.sh' 00:01:00.041 Start stage prepare_nvme.sh 00:01:00.041 + [[ -n 0 ]] 00:01:00.041 + disk_prefix=ex0 00:01:00.041 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:00.041 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:00.041 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:00.041 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.041 ++ SPDK_TEST_NVMF=1 00:01:00.041 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.041 ++ SPDK_TEST_URING=1 00:01:00.041 ++ SPDK_TEST_USDT=1 00:01:00.041 ++ SPDK_RUN_UBSAN=1 00:01:00.041 ++ NET_TYPE=virt 00:01:00.041 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:00.041 ++ RUN_NIGHTLY=0 00:01:00.041 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:00.041 + nvme_files=() 00:01:00.041 + declare -A nvme_files 00:01:00.041 + backend_dir=/var/lib/libvirt/images/backends 00:01:00.041 + nvme_files['nvme.img']=5G 00:01:00.041 + nvme_files['nvme-cmb.img']=5G 00:01:00.041 + nvme_files['nvme-multi0.img']=4G 00:01:00.041 + nvme_files['nvme-multi1.img']=4G 00:01:00.041 + nvme_files['nvme-multi2.img']=4G 00:01:00.041 + nvme_files['nvme-openstack.img']=8G 00:01:00.041 + nvme_files['nvme-zns.img']=5G 00:01:00.041 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:00.041 + (( SPDK_TEST_FTL == 1 )) 00:01:00.041 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:00.041 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:00.041 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:00.041 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:00.041 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:00.041 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:00.041 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:00.041 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.041 + for nvme in "${!nvme_files[@]}" 00:01:00.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:00.976 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.976 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:00.976 + echo 'End stage prepare_nvme.sh' 00:01:00.976 End stage prepare_nvme.sh 00:01:00.987 [Pipeline] sh 00:01:01.266 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:01.266 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:01:01.266 00:01:01.266 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:01.266 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:01.266 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:01.266 HELP=0 00:01:01.266 DRY_RUN=0 00:01:01.266 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:01.266 NVME_DISKS_TYPE=nvme,nvme, 00:01:01.266 NVME_AUTO_CREATE=0 00:01:01.266 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:01.266 NVME_CMB=,, 00:01:01.266 NVME_PMR=,, 00:01:01.266 NVME_ZNS=,, 00:01:01.266 NVME_MS=,, 00:01:01.266 NVME_FDP=,, 00:01:01.266 SPDK_VAGRANT_DISTRO=fedora38 00:01:01.266 SPDK_VAGRANT_VMCPU=10 00:01:01.266 SPDK_VAGRANT_VMRAM=12288 00:01:01.266 SPDK_VAGRANT_PROVIDER=libvirt 00:01:01.266 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:01.266 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:01.266 SPDK_OPENSTACK_NETWORK=0 00:01:01.266 VAGRANT_PACKAGE_BOX=0 00:01:01.266 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:01.266 FORCE_DISTRO=true 00:01:01.266 VAGRANT_BOX_VERSION= 00:01:01.266 EXTRA_VAGRANTFILES= 00:01:01.266 NIC_MODEL=e1000 00:01:01.266 00:01:01.266 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:01.266 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:04.546 Bringing machine 'default' up with 'libvirt' provider... 00:01:05.482 ==> default: Creating image (snapshot of base box volume). 00:01:05.483 ==> default: Creating domain with the following settings... 00:01:05.483 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721031237_f3c8af98095cd50c38c5 00:01:05.483 ==> default: -- Domain type: kvm 00:01:05.483 ==> default: -- Cpus: 10 00:01:05.483 ==> default: -- Feature: acpi 00:01:05.483 ==> default: -- Feature: apic 00:01:05.483 ==> default: -- Feature: pae 00:01:05.483 ==> default: -- Memory: 12288M 00:01:05.483 ==> default: -- Memory Backing: hugepages: 00:01:05.483 ==> default: -- Management MAC: 00:01:05.483 ==> default: -- Loader: 00:01:05.483 ==> default: -- Nvram: 00:01:05.483 ==> default: -- Base box: spdk/fedora38 00:01:05.483 ==> default: -- Storage pool: default 00:01:05.483 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721031237_f3c8af98095cd50c38c5.img (20G) 00:01:05.483 ==> default: -- Volume Cache: default 00:01:05.483 ==> default: -- Kernel: 00:01:05.483 ==> default: -- Initrd: 00:01:05.483 ==> default: -- Graphics Type: vnc 00:01:05.483 ==> default: -- Graphics Port: -1 00:01:05.483 ==> default: -- Graphics IP: 127.0.0.1 00:01:05.483 ==> default: -- Graphics Password: Not defined 00:01:05.483 ==> default: -- Video Type: cirrus 00:01:05.483 ==> default: -- Video VRAM: 9216 00:01:05.483 ==> default: -- Sound Type: 00:01:05.483 ==> default: -- Keymap: en-us 00:01:05.483 ==> default: -- TPM Path: 00:01:05.483 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:05.483 ==> default: -- Command line args: 00:01:05.483 ==> default: -> value=-device, 00:01:05.483 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:05.483 ==> default: -> value=-drive, 00:01:05.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:05.483 ==> default: -> value=-device, 00:01:05.483 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.483 ==> default: -> value=-device, 00:01:05.483 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:05.483 ==> default: -> value=-drive, 00:01:05.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:05.483 ==> default: -> value=-device, 00:01:05.483 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.483 ==> default: -> value=-drive, 00:01:05.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:05.483 ==> default: -> value=-device, 00:01:05.483 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.483 ==> default: -> value=-drive, 00:01:05.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:05.483 ==> default: -> value=-device, 00:01:05.483 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:05.742 ==> default: Creating shared folders metadata... 00:01:05.742 ==> default: Starting domain. 00:01:07.645 ==> default: Waiting for domain to get an IP address... 00:01:25.741 ==> default: Waiting for SSH to become available... 00:01:26.679 ==> default: Configuring and enabling network interfaces... 00:01:30.860 default: SSH address: 192.168.121.202:22 00:01:30.860 default: SSH username: vagrant 00:01:30.860 default: SSH auth method: private key 00:01:32.760 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:40.864 ==> default: Mounting SSHFS shared folder... 00:01:42.237 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:42.237 ==> default: Checking Mount.. 00:01:43.204 ==> default: Folder Successfully Mounted! 00:01:43.204 ==> default: Running provisioner: file... 00:01:44.138 default: ~/.gitconfig => .gitconfig 00:01:44.396 00:01:44.396 SUCCESS! 00:01:44.396 00:01:44.396 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:44.396 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:44.396 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
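Once the box is up, the SUCCESS banner above can be followed literally: the VM is driven from the generated fedora38-libvirt directory with ordinary vagrant commands. A minimal sketch, with the directory path taken from this run:

    cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt
    vagrant ssh          # log in to the test VM
    vagrant suspend      # pause the VM
    vagrant resume       # bring it back
    vagrant destroy -f   # tear the libvirt domain down; remove the directory afterwards to erase all trace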
00:01:44.396 00:01:44.408 [Pipeline] } 00:01:44.431 [Pipeline] // stage 00:01:44.442 [Pipeline] dir 00:01:44.443 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:44.445 [Pipeline] { 00:01:44.463 [Pipeline] catchError 00:01:44.465 [Pipeline] { 00:01:44.484 [Pipeline] sh 00:01:44.765 + vagrant ssh-config --host vagrant 00:01:44.765 + sed -ne /^Host/,$p 00:01:44.765 + tee ssh_conf 00:01:48.974 Host vagrant 00:01:48.974 HostName 192.168.121.202 00:01:48.974 User vagrant 00:01:48.974 Port 22 00:01:48.974 UserKnownHostsFile /dev/null 00:01:48.974 StrictHostKeyChecking no 00:01:48.974 PasswordAuthentication no 00:01:48.974 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:48.974 IdentitiesOnly yes 00:01:48.974 LogLevel FATAL 00:01:48.974 ForwardAgent yes 00:01:48.974 ForwardX11 yes 00:01:48.974 00:01:48.987 [Pipeline] withEnv 00:01:48.989 [Pipeline] { 00:01:49.005 [Pipeline] sh 00:01:49.284 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:49.284 source /etc/os-release 00:01:49.284 [[ -e /image.version ]] && img=$(< /image.version) 00:01:49.284 # Minimal, systemd-like check. 00:01:49.284 if [[ -e /.dockerenv ]]; then 00:01:49.284 # Clear garbage from the node's name: 00:01:49.284 # agt-er_autotest_547-896 -> autotest_547-896 00:01:49.284 # $HOSTNAME is the actual container id 00:01:49.284 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:49.284 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:49.284 # We can assume this is a mount from a host where container is running, 00:01:49.284 # so fetch its hostname to easily identify the target swarm worker. 00:01:49.284 container="$(< /etc/hostname) ($agent)" 00:01:49.284 else 00:01:49.284 # Fallback 00:01:49.284 container=$agent 00:01:49.284 fi 00:01:49.284 fi 00:01:49.284 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:49.284 00:01:49.297 [Pipeline] } 00:01:49.317 [Pipeline] // withEnv 00:01:49.327 [Pipeline] setCustomBuildProperty 00:01:49.343 [Pipeline] stage 00:01:49.345 [Pipeline] { (Tests) 00:01:49.364 [Pipeline] sh 00:01:49.642 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:49.656 [Pipeline] sh 00:01:49.935 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:49.950 [Pipeline] timeout 00:01:49.950 Timeout set to expire in 30 min 00:01:49.952 [Pipeline] { 00:01:49.965 [Pipeline] sh 00:01:50.254 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:50.821 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:01:50.832 [Pipeline] sh 00:01:51.107 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:51.127 [Pipeline] sh 00:01:51.437 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.454 [Pipeline] sh 00:01:51.734 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:51.734 ++ readlink -f spdk_repo 00:01:51.734 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:51.734 + [[ -n /home/vagrant/spdk_repo ]] 00:01:51.734 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:51.734 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:51.734 + 
[[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:51.734 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:51.734 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:51.734 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:51.734 + cd /home/vagrant/spdk_repo 00:01:51.734 + source /etc/os-release 00:01:51.734 ++ NAME='Fedora Linux' 00:01:51.734 ++ VERSION='38 (Cloud Edition)' 00:01:51.734 ++ ID=fedora 00:01:51.734 ++ VERSION_ID=38 00:01:51.734 ++ VERSION_CODENAME= 00:01:51.734 ++ PLATFORM_ID=platform:f38 00:01:51.734 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:51.734 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:51.734 ++ LOGO=fedora-logo-icon 00:01:51.734 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:51.734 ++ HOME_URL=https://fedoraproject.org/ 00:01:51.734 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:51.734 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:51.734 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:51.734 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:51.734 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:51.734 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:51.734 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:51.734 ++ SUPPORT_END=2024-05-14 00:01:51.734 ++ VARIANT='Cloud Edition' 00:01:51.734 ++ VARIANT_ID=cloud 00:01:51.734 + uname -a 00:01:51.734 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:51.734 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:52.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:52.300 Hugepages 00:01:52.300 node hugesize free / total 00:01:52.300 node0 1048576kB 0 / 0 00:01:52.300 node0 2048kB 0 / 0 00:01:52.300 00:01:52.301 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:52.301 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:52.301 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:52.301 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:52.301 + rm -f /tmp/spdk-ld-path 00:01:52.301 + source autorun-spdk.conf 00:01:52.301 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.301 ++ SPDK_TEST_NVMF=1 00:01:52.301 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:52.301 ++ SPDK_TEST_URING=1 00:01:52.301 ++ SPDK_TEST_USDT=1 00:01:52.301 ++ SPDK_RUN_UBSAN=1 00:01:52.301 ++ NET_TYPE=virt 00:01:52.301 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.301 ++ RUN_NIGHTLY=0 00:01:52.301 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:52.301 + [[ -n '' ]] 00:01:52.301 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:52.301 + for M in /var/spdk/build-*-manifest.txt 00:01:52.301 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:52.301 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.301 + for M in /var/spdk/build-*-manifest.txt 00:01:52.301 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:52.301 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.301 ++ uname 00:01:52.301 + [[ Linux == \L\i\n\u\x ]] 00:01:52.301 + sudo dmesg -T 00:01:52.560 + sudo dmesg --clear 00:01:52.560 + dmesg_pid=5098 00:01:52.560 + sudo dmesg -Tw 00:01:52.560 + [[ Fedora Linux == FreeBSD ]] 00:01:52.560 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.560 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.560 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:52.560 + [[ -x /usr/src/fio-static/fio 
]] 00:01:52.560 + export FIO_BIN=/usr/src/fio-static/fio 00:01:52.560 + FIO_BIN=/usr/src/fio-static/fio 00:01:52.560 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:52.560 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:52.560 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:52.560 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.560 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.560 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:52.560 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.560 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.560 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.560 Test configuration: 00:01:52.560 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.560 SPDK_TEST_NVMF=1 00:01:52.560 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:52.560 SPDK_TEST_URING=1 00:01:52.560 SPDK_TEST_USDT=1 00:01:52.560 SPDK_RUN_UBSAN=1 00:01:52.560 NET_TYPE=virt 00:01:52.560 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.560 RUN_NIGHTLY=0 08:14:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:52.560 08:14:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:52.560 08:14:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:52.560 08:14:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:52.560 08:14:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.560 08:14:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.560 08:14:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.560 08:14:44 -- paths/export.sh@5 -- $ export PATH 00:01:52.560 08:14:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.560 08:14:44 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:52.560 08:14:44 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:52.560 08:14:44 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721031284.XXXXXX 00:01:52.560 08:14:44 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721031284.JTyEAo 00:01:52.560 08:14:44 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:52.560 08:14:44 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:52.560 08:14:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:52.560 08:14:44 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:52.560 08:14:44 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:52.560 08:14:44 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:52.560 08:14:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:52.560 08:14:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.560 08:14:44 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:52.560 08:14:44 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:52.560 08:14:44 -- pm/common@17 -- $ local monitor 00:01:52.560 08:14:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.560 08:14:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.560 08:14:44 -- pm/common@25 -- $ sleep 1 00:01:52.560 08:14:44 -- pm/common@21 -- $ date +%s 00:01:52.560 08:14:44 -- pm/common@21 -- $ date +%s 00:01:52.560 08:14:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721031284 00:01:52.560 08:14:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721031284 00:01:52.560 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721031284_collect-vmstat.pm.log 00:01:52.560 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721031284_collect-cpu-load.pm.log 00:01:53.498 08:14:45 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:53.498 08:14:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:53.498 08:14:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:53.498 08:14:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:53.498 08:14:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:53.498 Mon Jul 15 08:14:45 AM UTC 2024 00:01:53.498 08:14:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:53.498 v24.09-pre-202-g719d03c6a 00:01:53.498 08:14:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:53.498 08:14:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:53.498 08:14:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:53.498 08:14:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:53.498 08:14:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:53.498 08:14:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.498 ************************************ 00:01:53.498 START TEST ubsan 00:01:53.498 ************************************ 00:01:53.498 using ubsan 00:01:53.498 08:14:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:53.498 00:01:53.498 real 0m0.000s 
00:01:53.498 user 0m0.000s 00:01:53.498 sys 0m0.000s 00:01:53.498 08:14:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:53.498 08:14:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:53.498 ************************************ 00:01:53.498 END TEST ubsan 00:01:53.498 ************************************ 00:01:53.757 08:14:45 -- common/autotest_common.sh@1142 -- $ return 0 00:01:53.757 08:14:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:53.757 08:14:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:53.757 08:14:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:53.757 08:14:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:53.758 08:14:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:53.758 08:14:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:53.758 08:14:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:53.758 08:14:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:53.758 08:14:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:53.758 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:53.758 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:54.015 Using 'verbs' RDMA provider 00:02:07.170 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:19.366 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:19.366 Creating mk/config.mk...done. 00:02:19.366 Creating mk/cc.flags.mk...done. 00:02:19.366 Type 'make' to build. 00:02:19.366 08:15:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:19.366 08:15:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:19.366 08:15:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:19.366 08:15:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.366 ************************************ 00:02:19.366 START TEST make 00:02:19.366 ************************************ 00:02:19.366 08:15:11 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:19.624 make[1]: Nothing to be done for 'all'. 
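The configure invocation above is effectively the whole build recipe for this job. A minimal sketch of reproducing it by hand inside the VM, assuming the same checkout at /home/vagrant/spdk_repo/spdk and fio sources at /usr/src/fio (both paths taken from the log):

    # Re-run the SPDK configure step with the flags autobuild.sh used, then build
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10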
00:02:31.938 The Meson build system 00:02:31.939 Version: 1.3.1 00:02:31.939 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:31.939 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:31.939 Build type: native build 00:02:31.939 Program cat found: YES (/usr/bin/cat) 00:02:31.939 Project name: DPDK 00:02:31.939 Project version: 24.03.0 00:02:31.939 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.939 C linker for the host machine: cc ld.bfd 2.39-16 00:02:31.939 Host machine cpu family: x86_64 00:02:31.939 Host machine cpu: x86_64 00:02:31.939 Message: ## Building in Developer Mode ## 00:02:31.939 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.939 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:31.939 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.939 Program python3 found: YES (/usr/bin/python3) 00:02:31.939 Program cat found: YES (/usr/bin/cat) 00:02:31.939 Compiler for C supports arguments -march=native: YES 00:02:31.939 Checking for size of "void *" : 8 00:02:31.939 Checking for size of "void *" : 8 (cached) 00:02:31.939 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:31.939 Library m found: YES 00:02:31.939 Library numa found: YES 00:02:31.939 Has header "numaif.h" : YES 00:02:31.939 Library fdt found: NO 00:02:31.939 Library execinfo found: NO 00:02:31.939 Has header "execinfo.h" : YES 00:02:31.939 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.939 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:31.939 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:31.939 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:31.939 Run-time dependency openssl found: YES 3.0.9 00:02:31.939 Run-time dependency libpcap found: YES 1.10.4 00:02:31.939 Has header "pcap.h" with dependency libpcap: YES 00:02:31.939 Compiler for C supports arguments -Wcast-qual: YES 00:02:31.939 Compiler for C supports arguments -Wdeprecated: YES 00:02:31.939 Compiler for C supports arguments -Wformat: YES 00:02:31.939 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:31.939 Compiler for C supports arguments -Wformat-security: NO 00:02:31.939 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.939 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:31.939 Compiler for C supports arguments -Wnested-externs: YES 00:02:31.939 Compiler for C supports arguments -Wold-style-definition: YES 00:02:31.939 Compiler for C supports arguments -Wpointer-arith: YES 00:02:31.939 Compiler for C supports arguments -Wsign-compare: YES 00:02:31.939 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:31.939 Compiler for C supports arguments -Wundef: YES 00:02:31.939 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.939 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:31.939 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:31.939 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.939 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:31.939 Program objdump found: YES (/usr/bin/objdump) 00:02:31.939 Compiler for C supports arguments -mavx512f: YES 00:02:31.939 Checking if "AVX512 checking" compiles: YES 00:02:31.939 Fetching value of define "__SSE4_2__" : 1 00:02:31.939 Fetching value of define 
"__AES__" : 1 00:02:31.939 Fetching value of define "__AVX__" : 1 00:02:31.939 Fetching value of define "__AVX2__" : 1 00:02:31.939 Fetching value of define "__AVX512BW__" : (undefined) 00:02:31.939 Fetching value of define "__AVX512CD__" : (undefined) 00:02:31.939 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:31.939 Fetching value of define "__AVX512F__" : (undefined) 00:02:31.939 Fetching value of define "__AVX512VL__" : (undefined) 00:02:31.939 Fetching value of define "__PCLMUL__" : 1 00:02:31.939 Fetching value of define "__RDRND__" : 1 00:02:31.939 Fetching value of define "__RDSEED__" : 1 00:02:31.939 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:31.939 Fetching value of define "__znver1__" : (undefined) 00:02:31.939 Fetching value of define "__znver2__" : (undefined) 00:02:31.939 Fetching value of define "__znver3__" : (undefined) 00:02:31.939 Fetching value of define "__znver4__" : (undefined) 00:02:31.939 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:31.939 Message: lib/log: Defining dependency "log" 00:02:31.939 Message: lib/kvargs: Defining dependency "kvargs" 00:02:31.939 Message: lib/telemetry: Defining dependency "telemetry" 00:02:31.939 Checking for function "getentropy" : NO 00:02:31.939 Message: lib/eal: Defining dependency "eal" 00:02:31.939 Message: lib/ring: Defining dependency "ring" 00:02:31.939 Message: lib/rcu: Defining dependency "rcu" 00:02:31.939 Message: lib/mempool: Defining dependency "mempool" 00:02:31.939 Message: lib/mbuf: Defining dependency "mbuf" 00:02:31.939 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:31.939 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.939 Compiler for C supports arguments -mpclmul: YES 00:02:31.939 Compiler for C supports arguments -maes: YES 00:02:31.939 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.939 Compiler for C supports arguments -mavx512bw: YES 00:02:31.939 Compiler for C supports arguments -mavx512dq: YES 00:02:31.939 Compiler for C supports arguments -mavx512vl: YES 00:02:31.939 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:31.939 Compiler for C supports arguments -mavx2: YES 00:02:31.939 Compiler for C supports arguments -mavx: YES 00:02:31.939 Message: lib/net: Defining dependency "net" 00:02:31.939 Message: lib/meter: Defining dependency "meter" 00:02:31.939 Message: lib/ethdev: Defining dependency "ethdev" 00:02:31.939 Message: lib/pci: Defining dependency "pci" 00:02:31.939 Message: lib/cmdline: Defining dependency "cmdline" 00:02:31.939 Message: lib/hash: Defining dependency "hash" 00:02:31.939 Message: lib/timer: Defining dependency "timer" 00:02:31.939 Message: lib/compressdev: Defining dependency "compressdev" 00:02:31.939 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:31.939 Message: lib/dmadev: Defining dependency "dmadev" 00:02:31.939 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:31.939 Message: lib/power: Defining dependency "power" 00:02:31.939 Message: lib/reorder: Defining dependency "reorder" 00:02:31.939 Message: lib/security: Defining dependency "security" 00:02:31.939 Has header "linux/userfaultfd.h" : YES 00:02:31.939 Has header "linux/vduse.h" : YES 00:02:31.939 Message: lib/vhost: Defining dependency "vhost" 00:02:31.939 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:31.939 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:31.939 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:31.939 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:31.939 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:31.939 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:31.939 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:31.939 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:31.939 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:31.939 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:31.939 Program doxygen found: YES (/usr/bin/doxygen) 00:02:31.939 Configuring doxy-api-html.conf using configuration 00:02:31.939 Configuring doxy-api-man.conf using configuration 00:02:31.939 Program mandb found: YES (/usr/bin/mandb) 00:02:31.939 Program sphinx-build found: NO 00:02:31.939 Configuring rte_build_config.h using configuration 00:02:31.939 Message: 00:02:31.939 ================= 00:02:31.939 Applications Enabled 00:02:31.939 ================= 00:02:31.939 00:02:31.939 apps: 00:02:31.939 00:02:31.939 00:02:31.939 Message: 00:02:31.939 ================= 00:02:31.939 Libraries Enabled 00:02:31.939 ================= 00:02:31.939 00:02:31.939 libs: 00:02:31.939 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:31.939 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:31.939 cryptodev, dmadev, power, reorder, security, vhost, 00:02:31.939 00:02:31.939 Message: 00:02:31.939 =============== 00:02:31.939 Drivers Enabled 00:02:31.939 =============== 00:02:31.939 00:02:31.939 common: 00:02:31.939 00:02:31.939 bus: 00:02:31.939 pci, vdev, 00:02:31.940 mempool: 00:02:31.940 ring, 00:02:31.940 dma: 00:02:31.940 00:02:31.940 net: 00:02:31.940 00:02:31.940 crypto: 00:02:31.940 00:02:31.940 compress: 00:02:31.940 00:02:31.940 vdpa: 00:02:31.940 00:02:31.940 00:02:31.940 Message: 00:02:31.940 ================= 00:02:31.940 Content Skipped 00:02:31.940 ================= 00:02:31.940 00:02:31.940 apps: 00:02:31.940 dumpcap: explicitly disabled via build config 00:02:31.940 graph: explicitly disabled via build config 00:02:31.940 pdump: explicitly disabled via build config 00:02:31.940 proc-info: explicitly disabled via build config 00:02:31.940 test-acl: explicitly disabled via build config 00:02:31.940 test-bbdev: explicitly disabled via build config 00:02:31.940 test-cmdline: explicitly disabled via build config 00:02:31.940 test-compress-perf: explicitly disabled via build config 00:02:31.940 test-crypto-perf: explicitly disabled via build config 00:02:31.940 test-dma-perf: explicitly disabled via build config 00:02:31.940 test-eventdev: explicitly disabled via build config 00:02:31.940 test-fib: explicitly disabled via build config 00:02:31.940 test-flow-perf: explicitly disabled via build config 00:02:31.940 test-gpudev: explicitly disabled via build config 00:02:31.940 test-mldev: explicitly disabled via build config 00:02:31.940 test-pipeline: explicitly disabled via build config 00:02:31.940 test-pmd: explicitly disabled via build config 00:02:31.940 test-regex: explicitly disabled via build config 00:02:31.940 test-sad: explicitly disabled via build config 00:02:31.940 test-security-perf: explicitly disabled via build config 00:02:31.940 00:02:31.940 libs: 00:02:31.940 argparse: explicitly disabled via build config 00:02:31.940 metrics: explicitly disabled via build config 00:02:31.940 acl: explicitly disabled via build config 00:02:31.940 bbdev: explicitly disabled via build config 00:02:31.940 
bitratestats: explicitly disabled via build config 00:02:31.940 bpf: explicitly disabled via build config 00:02:31.940 cfgfile: explicitly disabled via build config 00:02:31.940 distributor: explicitly disabled via build config 00:02:31.940 efd: explicitly disabled via build config 00:02:31.940 eventdev: explicitly disabled via build config 00:02:31.940 dispatcher: explicitly disabled via build config 00:02:31.940 gpudev: explicitly disabled via build config 00:02:31.940 gro: explicitly disabled via build config 00:02:31.940 gso: explicitly disabled via build config 00:02:31.940 ip_frag: explicitly disabled via build config 00:02:31.940 jobstats: explicitly disabled via build config 00:02:31.940 latencystats: explicitly disabled via build config 00:02:31.940 lpm: explicitly disabled via build config 00:02:31.940 member: explicitly disabled via build config 00:02:31.940 pcapng: explicitly disabled via build config 00:02:31.940 rawdev: explicitly disabled via build config 00:02:31.940 regexdev: explicitly disabled via build config 00:02:31.940 mldev: explicitly disabled via build config 00:02:31.940 rib: explicitly disabled via build config 00:02:31.940 sched: explicitly disabled via build config 00:02:31.940 stack: explicitly disabled via build config 00:02:31.940 ipsec: explicitly disabled via build config 00:02:31.940 pdcp: explicitly disabled via build config 00:02:31.940 fib: explicitly disabled via build config 00:02:31.940 port: explicitly disabled via build config 00:02:31.940 pdump: explicitly disabled via build config 00:02:31.940 table: explicitly disabled via build config 00:02:31.940 pipeline: explicitly disabled via build config 00:02:31.940 graph: explicitly disabled via build config 00:02:31.940 node: explicitly disabled via build config 00:02:31.940 00:02:31.940 drivers: 00:02:31.940 common/cpt: not in enabled drivers build config 00:02:31.940 common/dpaax: not in enabled drivers build config 00:02:31.940 common/iavf: not in enabled drivers build config 00:02:31.940 common/idpf: not in enabled drivers build config 00:02:31.940 common/ionic: not in enabled drivers build config 00:02:31.940 common/mvep: not in enabled drivers build config 00:02:31.940 common/octeontx: not in enabled drivers build config 00:02:31.940 bus/auxiliary: not in enabled drivers build config 00:02:31.940 bus/cdx: not in enabled drivers build config 00:02:31.940 bus/dpaa: not in enabled drivers build config 00:02:31.940 bus/fslmc: not in enabled drivers build config 00:02:31.940 bus/ifpga: not in enabled drivers build config 00:02:31.940 bus/platform: not in enabled drivers build config 00:02:31.940 bus/uacce: not in enabled drivers build config 00:02:31.940 bus/vmbus: not in enabled drivers build config 00:02:31.940 common/cnxk: not in enabled drivers build config 00:02:31.940 common/mlx5: not in enabled drivers build config 00:02:31.940 common/nfp: not in enabled drivers build config 00:02:31.940 common/nitrox: not in enabled drivers build config 00:02:31.940 common/qat: not in enabled drivers build config 00:02:31.940 common/sfc_efx: not in enabled drivers build config 00:02:31.940 mempool/bucket: not in enabled drivers build config 00:02:31.940 mempool/cnxk: not in enabled drivers build config 00:02:31.940 mempool/dpaa: not in enabled drivers build config 00:02:31.940 mempool/dpaa2: not in enabled drivers build config 00:02:31.940 mempool/octeontx: not in enabled drivers build config 00:02:31.940 mempool/stack: not in enabled drivers build config 00:02:31.940 dma/cnxk: not in enabled drivers build 
config 00:02:31.940 dma/dpaa: not in enabled drivers build config 00:02:31.940 dma/dpaa2: not in enabled drivers build config 00:02:31.940 dma/hisilicon: not in enabled drivers build config 00:02:31.940 dma/idxd: not in enabled drivers build config 00:02:31.940 dma/ioat: not in enabled drivers build config 00:02:31.940 dma/skeleton: not in enabled drivers build config 00:02:31.940 net/af_packet: not in enabled drivers build config 00:02:31.940 net/af_xdp: not in enabled drivers build config 00:02:31.940 net/ark: not in enabled drivers build config 00:02:31.940 net/atlantic: not in enabled drivers build config 00:02:31.940 net/avp: not in enabled drivers build config 00:02:31.940 net/axgbe: not in enabled drivers build config 00:02:31.940 net/bnx2x: not in enabled drivers build config 00:02:31.940 net/bnxt: not in enabled drivers build config 00:02:31.940 net/bonding: not in enabled drivers build config 00:02:31.940 net/cnxk: not in enabled drivers build config 00:02:31.940 net/cpfl: not in enabled drivers build config 00:02:31.940 net/cxgbe: not in enabled drivers build config 00:02:31.940 net/dpaa: not in enabled drivers build config 00:02:31.940 net/dpaa2: not in enabled drivers build config 00:02:31.940 net/e1000: not in enabled drivers build config 00:02:31.940 net/ena: not in enabled drivers build config 00:02:31.940 net/enetc: not in enabled drivers build config 00:02:31.940 net/enetfec: not in enabled drivers build config 00:02:31.940 net/enic: not in enabled drivers build config 00:02:31.940 net/failsafe: not in enabled drivers build config 00:02:31.940 net/fm10k: not in enabled drivers build config 00:02:31.940 net/gve: not in enabled drivers build config 00:02:31.940 net/hinic: not in enabled drivers build config 00:02:31.940 net/hns3: not in enabled drivers build config 00:02:31.940 net/i40e: not in enabled drivers build config 00:02:31.940 net/iavf: not in enabled drivers build config 00:02:31.940 net/ice: not in enabled drivers build config 00:02:31.940 net/idpf: not in enabled drivers build config 00:02:31.940 net/igc: not in enabled drivers build config 00:02:31.940 net/ionic: not in enabled drivers build config 00:02:31.940 net/ipn3ke: not in enabled drivers build config 00:02:31.940 net/ixgbe: not in enabled drivers build config 00:02:31.940 net/mana: not in enabled drivers build config 00:02:31.940 net/memif: not in enabled drivers build config 00:02:31.940 net/mlx4: not in enabled drivers build config 00:02:31.940 net/mlx5: not in enabled drivers build config 00:02:31.940 net/mvneta: not in enabled drivers build config 00:02:31.940 net/mvpp2: not in enabled drivers build config 00:02:31.940 net/netvsc: not in enabled drivers build config 00:02:31.940 net/nfb: not in enabled drivers build config 00:02:31.940 net/nfp: not in enabled drivers build config 00:02:31.940 net/ngbe: not in enabled drivers build config 00:02:31.940 net/null: not in enabled drivers build config 00:02:31.940 net/octeontx: not in enabled drivers build config 00:02:31.940 net/octeon_ep: not in enabled drivers build config 00:02:31.940 net/pcap: not in enabled drivers build config 00:02:31.940 net/pfe: not in enabled drivers build config 00:02:31.940 net/qede: not in enabled drivers build config 00:02:31.940 net/ring: not in enabled drivers build config 00:02:31.940 net/sfc: not in enabled drivers build config 00:02:31.940 net/softnic: not in enabled drivers build config 00:02:31.940 net/tap: not in enabled drivers build config 00:02:31.940 net/thunderx: not in enabled drivers build config 00:02:31.940 
net/txgbe: not in enabled drivers build config 00:02:31.941 net/vdev_netvsc: not in enabled drivers build config 00:02:31.941 net/vhost: not in enabled drivers build config 00:02:31.941 net/virtio: not in enabled drivers build config 00:02:31.941 net/vmxnet3: not in enabled drivers build config 00:02:31.941 raw/*: missing internal dependency, "rawdev" 00:02:31.941 crypto/armv8: not in enabled drivers build config 00:02:31.941 crypto/bcmfs: not in enabled drivers build config 00:02:31.941 crypto/caam_jr: not in enabled drivers build config 00:02:31.941 crypto/ccp: not in enabled drivers build config 00:02:31.941 crypto/cnxk: not in enabled drivers build config 00:02:31.941 crypto/dpaa_sec: not in enabled drivers build config 00:02:31.941 crypto/dpaa2_sec: not in enabled drivers build config 00:02:31.941 crypto/ipsec_mb: not in enabled drivers build config 00:02:31.941 crypto/mlx5: not in enabled drivers build config 00:02:31.941 crypto/mvsam: not in enabled drivers build config 00:02:31.941 crypto/nitrox: not in enabled drivers build config 00:02:31.941 crypto/null: not in enabled drivers build config 00:02:31.941 crypto/octeontx: not in enabled drivers build config 00:02:31.941 crypto/openssl: not in enabled drivers build config 00:02:31.941 crypto/scheduler: not in enabled drivers build config 00:02:31.941 crypto/uadk: not in enabled drivers build config 00:02:31.941 crypto/virtio: not in enabled drivers build config 00:02:31.941 compress/isal: not in enabled drivers build config 00:02:31.941 compress/mlx5: not in enabled drivers build config 00:02:31.941 compress/nitrox: not in enabled drivers build config 00:02:31.941 compress/octeontx: not in enabled drivers build config 00:02:31.941 compress/zlib: not in enabled drivers build config 00:02:31.941 regex/*: missing internal dependency, "regexdev" 00:02:31.941 ml/*: missing internal dependency, "mldev" 00:02:31.941 vdpa/ifc: not in enabled drivers build config 00:02:31.941 vdpa/mlx5: not in enabled drivers build config 00:02:31.941 vdpa/nfp: not in enabled drivers build config 00:02:31.941 vdpa/sfc: not in enabled drivers build config 00:02:31.941 event/*: missing internal dependency, "eventdev" 00:02:31.941 baseband/*: missing internal dependency, "bbdev" 00:02:31.941 gpu/*: missing internal dependency, "gpudev" 00:02:31.941 00:02:31.941 00:02:31.941 Build targets in project: 85 00:02:31.941 00:02:31.941 DPDK 24.03.0 00:02:31.941 00:02:31.941 User defined options 00:02:31.941 buildtype : debug 00:02:31.941 default_library : shared 00:02:31.941 libdir : lib 00:02:31.941 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:31.941 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:31.941 c_link_args : 00:02:31.941 cpu_instruction_set: native 00:02:31.941 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:31.941 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:31.941 enable_docs : false 00:02:31.941 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:31.941 enable_kmods : false 00:02:31.941 max_lcores : 128 00:02:31.941 tests : false 00:02:31.941 00:02:31.941 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.941 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:31.941 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:31.941 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:31.941 [3/268] Linking static target lib/librte_kvargs.a 00:02:31.941 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:31.941 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:31.941 [6/268] Linking static target lib/librte_log.a 00:02:32.200 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:32.200 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:32.200 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.459 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:32.459 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:32.459 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:32.717 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:32.717 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:32.717 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:32.717 [16/268] Linking static target lib/librte_telemetry.a 00:02:32.977 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:32.977 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.977 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:32.977 [20/268] Linking target lib/librte_log.so.24.1 00:02:33.546 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:33.546 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:33.546 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.546 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:33.804 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:33.804 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:33.804 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.804 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:33.804 [29/268] Linking target lib/librte_telemetry.so.24.1 00:02:33.804 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.804 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:34.063 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.063 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.063 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:34.322 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:34.580 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:34.580 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.862 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.862 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.862 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.862 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.862 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.862 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.862 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:35.120 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.379 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.379 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.638 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.897 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.897 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.897 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.897 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.156 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.156 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.415 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.674 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.674 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.674 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.674 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.932 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:36.932 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.932 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.190 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.449 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.707 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.965 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.965 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.965 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.223 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.223 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.223 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.482 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.739 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.739 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.739 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.739 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.739 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.739 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.998 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.256 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.256 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.256 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.256 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.514 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.771 [85/268] Linking static target lib/librte_eal.a 00:02:40.029 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.029 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.029 [88/268] Linking static target lib/librte_rcu.a 00:02:40.029 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.029 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.029 [91/268] Linking static target lib/librte_ring.a 00:02:40.029 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.288 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.288 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.288 [95/268] Linking static target lib/librte_mempool.a 00:02:40.546 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.804 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.804 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.804 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.804 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.062 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.062 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.062 [103/268] Linking static target lib/librte_mbuf.a 00:02:41.318 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.883 [105/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.883 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.883 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.883 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.141 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.141 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.141 [111/268] Linking static target lib/librte_meter.a 00:02:42.141 [112/268] Linking static target lib/librte_net.a 00:02:42.399 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.399 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.656 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.656 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.915 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.173 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.431 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.689 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.623 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.623 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.623 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.623 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.882 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.882 [126/268] Linking static target lib/librte_pci.a 00:02:44.882 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.141 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.141 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.141 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.399 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.399 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.399 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.399 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.399 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.399 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.659 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.659 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.659 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.659 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:45.659 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.659 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.659 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.659 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.917 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.852 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.852 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:46.852 [148/268] Linking static target lib/librte_ethdev.a 00:02:46.852 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:46.852 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:46.852 [151/268] Linking static target lib/librte_cmdline.a 00:02:47.110 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.110 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.368 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.368 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.368 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:47.368 [157/268] Linking static target lib/librte_hash.a 00:02:47.368 [158/268] Linking static target lib/librte_timer.a 00:02:47.626 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.626 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.626 [161/268] 
Linking static target lib/librte_compressdev.a 00:02:47.939 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.198 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.198 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.198 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.457 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.457 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.457 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.457 [169/268] Linking static target lib/librte_cryptodev.a 00:02:48.715 [170/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.715 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:48.715 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.715 [173/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.715 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.715 [175/268] Linking static target lib/librte_dmadev.a 00:02:48.715 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.972 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.972 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.229 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.229 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.486 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.486 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.486 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.744 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.744 [185/268] Linking static target lib/librte_power.a 00:02:49.744 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.744 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.744 [188/268] Linking static target lib/librte_reorder.a 00:02:50.001 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.001 [190/268] Linking static target lib/librte_security.a 00:02:50.001 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.259 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.259 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.516 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.080 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.080 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.080 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.337 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.337 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.594 [200/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.594 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.851 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.146 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.146 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.146 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.146 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.404 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.404 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.404 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.404 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.404 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.663 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.663 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.663 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.663 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.663 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:52.663 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.663 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.663 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:52.922 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.922 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.922 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.922 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.185 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.185 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.185 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.185 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:53.185 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.446 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.704 [230/268] Linking static target lib/librte_vhost.a 00:02:53.962 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.962 [232/268] Linking target lib/librte_eal.so.24.1 00:02:54.220 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:54.220 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:54.220 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:54.220 [236/268] Linking target lib/librte_meter.so.24.1 00:02:54.220 [237/268] Linking target lib/librte_ring.so.24.1 00:02:54.220 [238/268] Linking target lib/librte_timer.so.24.1 00:02:54.220 [239/268] Linking target 
lib/librte_pci.so.24.1 00:02:54.479 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:54.479 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:54.479 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:54.479 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:54.479 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:54.479 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:54.479 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:54.479 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:54.741 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:54.741 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:54.741 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:54.741 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:54.741 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:54.998 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:54.998 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:54.998 [255/268] Linking target lib/librte_net.so.24.1 00:02:54.998 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:54.998 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:54.998 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:54.998 [259/268] Linking target lib/librte_security.so.24.1 00:02:54.998 [260/268] Linking target lib/librte_hash.so.24.1 00:02:54.998 [261/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.998 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:55.256 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:55.848 [264/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.848 [265/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.848 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.106 [267/268] Linking target lib/librte_power.so.24.1 00:02:56.106 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:56.106 INFO: autodetecting backend as ninja 00:02:56.106 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.481 CC lib/ut_mock/mock.o 00:02:57.481 CC lib/ut/ut.o 00:02:57.481 CC lib/log/log.o 00:02:57.481 CC lib/log/log_flags.o 00:02:57.481 CC lib/log/log_deprecated.o 00:02:57.481 LIB libspdk_ut.a 00:02:57.481 LIB libspdk_ut_mock.a 00:02:57.481 LIB libspdk_log.a 00:02:57.481 SO libspdk_ut.so.2.0 00:02:57.481 SO libspdk_ut_mock.so.6.0 00:02:57.481 SO libspdk_log.so.7.0 00:02:57.481 SYMLINK libspdk_ut_mock.so 00:02:57.481 SYMLINK libspdk_ut.so 00:02:57.481 SYMLINK libspdk_log.so 00:02:57.739 CC lib/util/base64.o 00:02:57.739 CC lib/util/bit_array.o 00:02:57.739 CC lib/util/cpuset.o 00:02:57.739 CC lib/util/crc16.o 00:02:57.739 CC lib/util/crc32c.o 00:02:57.739 CC lib/util/crc32.o 00:02:57.739 CC lib/dma/dma.o 00:02:57.739 CXX lib/trace_parser/trace.o 00:02:57.739 CC lib/ioat/ioat.o 00:02:57.739 CC lib/vfio_user/host/vfio_user_pci.o 00:02:57.998 CC lib/util/crc32_ieee.o 00:02:57.998 CC lib/util/crc64.o 00:02:57.998 CC 
lib/vfio_user/host/vfio_user.o 00:02:57.998 LIB libspdk_dma.a 00:02:57.998 CC lib/util/dif.o 00:02:57.998 CC lib/util/fd.o 00:02:57.998 SO libspdk_dma.so.4.0 00:02:57.998 CC lib/util/file.o 00:02:57.998 SYMLINK libspdk_dma.so 00:02:57.998 CC lib/util/hexlify.o 00:02:57.998 CC lib/util/iov.o 00:02:57.998 CC lib/util/math.o 00:02:58.257 LIB libspdk_ioat.a 00:02:58.257 CC lib/util/pipe.o 00:02:58.257 LIB libspdk_vfio_user.a 00:02:58.257 CC lib/util/strerror_tls.o 00:02:58.257 SO libspdk_ioat.so.7.0 00:02:58.257 SO libspdk_vfio_user.so.5.0 00:02:58.257 CC lib/util/string.o 00:02:58.257 SYMLINK libspdk_ioat.so 00:02:58.257 CC lib/util/uuid.o 00:02:58.257 SYMLINK libspdk_vfio_user.so 00:02:58.257 CC lib/util/fd_group.o 00:02:58.257 CC lib/util/xor.o 00:02:58.257 CC lib/util/zipf.o 00:02:58.515 LIB libspdk_util.a 00:02:58.773 SO libspdk_util.so.9.1 00:02:59.054 SYMLINK libspdk_util.so 00:02:59.054 LIB libspdk_trace_parser.a 00:02:59.054 SO libspdk_trace_parser.so.5.0 00:02:59.054 CC lib/idxd/idxd.o 00:02:59.054 CC lib/env_dpdk/env.o 00:02:59.054 CC lib/vmd/vmd.o 00:02:59.054 CC lib/rdma_provider/common.o 00:02:59.054 CC lib/json/json_parse.o 00:02:59.054 CC lib/json/json_util.o 00:02:59.054 CC lib/rdma_utils/rdma_utils.o 00:02:59.054 CC lib/json/json_write.o 00:02:59.054 CC lib/conf/conf.o 00:02:59.312 SYMLINK libspdk_trace_parser.so 00:02:59.312 CC lib/idxd/idxd_user.o 00:02:59.312 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:59.571 LIB libspdk_conf.a 00:02:59.571 SO libspdk_conf.so.6.0 00:02:59.571 LIB libspdk_rdma_provider.a 00:02:59.571 CC lib/vmd/led.o 00:02:59.571 CC lib/env_dpdk/memory.o 00:02:59.571 SYMLINK libspdk_conf.so 00:02:59.571 LIB libspdk_rdma_utils.a 00:02:59.571 SO libspdk_rdma_provider.so.6.0 00:02:59.571 CC lib/idxd/idxd_kernel.o 00:02:59.571 LIB libspdk_json.a 00:02:59.571 SO libspdk_rdma_utils.so.1.0 00:02:59.829 SO libspdk_json.so.6.0 00:02:59.829 CC lib/env_dpdk/pci.o 00:02:59.829 CC lib/env_dpdk/init.o 00:02:59.829 SYMLINK libspdk_rdma_provider.so 00:02:59.829 CC lib/env_dpdk/threads.o 00:02:59.829 SYMLINK libspdk_rdma_utils.so 00:02:59.829 SYMLINK libspdk_json.so 00:02:59.829 CC lib/env_dpdk/pci_ioat.o 00:02:59.829 CC lib/env_dpdk/pci_virtio.o 00:02:59.829 LIB libspdk_vmd.a 00:02:59.829 LIB libspdk_idxd.a 00:02:59.829 SO libspdk_vmd.so.6.0 00:02:59.829 CC lib/env_dpdk/pci_vmd.o 00:03:00.087 SO libspdk_idxd.so.12.0 00:03:00.087 CC lib/env_dpdk/pci_idxd.o 00:03:00.087 CC lib/env_dpdk/pci_event.o 00:03:00.087 SYMLINK libspdk_vmd.so 00:03:00.087 CC lib/env_dpdk/sigbus_handler.o 00:03:00.087 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.087 SYMLINK libspdk_idxd.so 00:03:00.087 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.087 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.087 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.087 CC lib/env_dpdk/pci_dpdk.o 00:03:00.087 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.345 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.345 LIB libspdk_jsonrpc.a 00:03:00.604 SO libspdk_jsonrpc.so.6.0 00:03:00.604 SYMLINK libspdk_jsonrpc.so 00:03:00.863 CC lib/rpc/rpc.o 00:03:01.121 LIB libspdk_rpc.a 00:03:01.121 SO libspdk_rpc.so.6.0 00:03:01.121 LIB libspdk_env_dpdk.a 00:03:01.121 SYMLINK libspdk_rpc.so 00:03:01.121 SO libspdk_env_dpdk.so.14.1 00:03:01.379 CC lib/trace/trace.o 00:03:01.379 CC lib/trace/trace_flags.o 00:03:01.379 CC lib/trace/trace_rpc.o 00:03:01.379 CC lib/notify/notify_rpc.o 00:03:01.379 CC lib/notify/notify.o 00:03:01.379 CC lib/keyring/keyring.o 00:03:01.379 CC lib/keyring/keyring_rpc.o 00:03:01.637 SYMLINK libspdk_env_dpdk.so 00:03:01.637 LIB 
libspdk_notify.a 00:03:01.637 LIB libspdk_trace.a 00:03:01.637 SO libspdk_notify.so.6.0 00:03:01.637 SO libspdk_trace.so.10.0 00:03:01.637 SYMLINK libspdk_notify.so 00:03:01.895 SYMLINK libspdk_trace.so 00:03:01.896 LIB libspdk_keyring.a 00:03:01.896 SO libspdk_keyring.so.1.0 00:03:01.896 SYMLINK libspdk_keyring.so 00:03:01.896 CC lib/thread/thread.o 00:03:01.896 CC lib/thread/iobuf.o 00:03:01.896 CC lib/sock/sock.o 00:03:01.896 CC lib/sock/sock_rpc.o 00:03:02.462 LIB libspdk_sock.a 00:03:02.462 SO libspdk_sock.so.10.0 00:03:02.462 SYMLINK libspdk_sock.so 00:03:02.720 CC lib/nvme/nvme_ctrlr.o 00:03:02.720 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.720 CC lib/nvme/nvme_ns_cmd.o 00:03:02.720 CC lib/nvme/nvme_fabric.o 00:03:02.720 CC lib/nvme/nvme_ns.o 00:03:02.720 CC lib/nvme/nvme_pcie.o 00:03:02.720 CC lib/nvme/nvme_pcie_common.o 00:03:02.720 CC lib/nvme/nvme_qpair.o 00:03:02.720 CC lib/nvme/nvme.o 00:03:04.095 CC lib/nvme/nvme_quirks.o 00:03:04.095 CC lib/nvme/nvme_transport.o 00:03:04.095 CC lib/nvme/nvme_discovery.o 00:03:04.095 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:04.095 LIB libspdk_thread.a 00:03:04.095 SO libspdk_thread.so.10.1 00:03:04.095 SYMLINK libspdk_thread.so 00:03:04.095 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:04.095 CC lib/nvme/nvme_tcp.o 00:03:04.095 CC lib/nvme/nvme_opal.o 00:03:04.354 CC lib/accel/accel.o 00:03:04.354 CC lib/blob/blobstore.o 00:03:04.354 CC lib/blob/request.o 00:03:04.612 CC lib/blob/zeroes.o 00:03:04.612 CC lib/blob/blob_bs_dev.o 00:03:04.612 CC lib/accel/accel_rpc.o 00:03:04.612 CC lib/nvme/nvme_io_msg.o 00:03:04.612 CC lib/accel/accel_sw.o 00:03:04.871 CC lib/nvme/nvme_poll_group.o 00:03:04.871 CC lib/nvme/nvme_zns.o 00:03:04.871 CC lib/nvme/nvme_stubs.o 00:03:04.871 CC lib/nvme/nvme_auth.o 00:03:05.129 CC lib/nvme/nvme_cuse.o 00:03:05.387 LIB libspdk_accel.a 00:03:05.387 CC lib/nvme/nvme_rdma.o 00:03:05.387 SO libspdk_accel.so.15.1 00:03:05.645 SYMLINK libspdk_accel.so 00:03:05.903 CC lib/init/json_config.o 00:03:05.903 CC lib/virtio/virtio.o 00:03:05.903 CC lib/init/subsystem.o 00:03:05.903 CC lib/bdev/bdev.o 00:03:05.903 CC lib/init/subsystem_rpc.o 00:03:05.903 CC lib/init/rpc.o 00:03:06.162 CC lib/bdev/bdev_rpc.o 00:03:06.162 CC lib/virtio/virtio_vhost_user.o 00:03:06.162 CC lib/virtio/virtio_vfio_user.o 00:03:06.162 CC lib/virtio/virtio_pci.o 00:03:06.162 CC lib/bdev/bdev_zone.o 00:03:06.162 CC lib/bdev/part.o 00:03:06.420 LIB libspdk_init.a 00:03:06.420 CC lib/bdev/scsi_nvme.o 00:03:06.420 SO libspdk_init.so.5.0 00:03:06.420 SYMLINK libspdk_init.so 00:03:06.420 LIB libspdk_virtio.a 00:03:06.678 SO libspdk_virtio.so.7.0 00:03:06.678 SYMLINK libspdk_virtio.so 00:03:06.678 CC lib/event/reactor.o 00:03:06.678 CC lib/event/app.o 00:03:06.678 CC lib/event/log_rpc.o 00:03:06.678 CC lib/event/scheduler_static.o 00:03:06.678 CC lib/event/app_rpc.o 00:03:06.937 LIB libspdk_nvme.a 00:03:07.195 SO libspdk_nvme.so.13.1 00:03:07.454 SYMLINK libspdk_nvme.so 00:03:07.454 LIB libspdk_event.a 00:03:07.454 SO libspdk_event.so.14.0 00:03:07.454 SYMLINK libspdk_event.so 00:03:08.388 LIB libspdk_blob.a 00:03:08.388 SO libspdk_blob.so.11.0 00:03:08.388 SYMLINK libspdk_blob.so 00:03:08.646 CC lib/lvol/lvol.o 00:03:08.646 CC lib/blobfs/blobfs.o 00:03:08.646 CC lib/blobfs/tree.o 00:03:08.904 LIB libspdk_bdev.a 00:03:08.904 SO libspdk_bdev.so.15.1 00:03:09.162 SYMLINK libspdk_bdev.so 00:03:09.421 CC lib/nbd/nbd.o 00:03:09.421 CC lib/nbd/nbd_rpc.o 00:03:09.421 CC lib/nvmf/ctrlr.o 00:03:09.421 CC lib/nvmf/ctrlr_discovery.o 00:03:09.421 CC lib/nvmf/ctrlr_bdev.o 00:03:09.421 CC 
lib/scsi/dev.o 00:03:09.421 CC lib/ftl/ftl_core.o 00:03:09.421 CC lib/ublk/ublk.o 00:03:09.421 CC lib/ftl/ftl_init.o 00:03:09.679 CC lib/scsi/lun.o 00:03:09.679 LIB libspdk_blobfs.a 00:03:09.679 LIB libspdk_lvol.a 00:03:09.679 CC lib/ftl/ftl_layout.o 00:03:09.679 SO libspdk_lvol.so.10.0 00:03:09.679 SO libspdk_blobfs.so.10.0 00:03:09.679 CC lib/ftl/ftl_debug.o 00:03:09.938 SYMLINK libspdk_lvol.so 00:03:09.938 SYMLINK libspdk_blobfs.so 00:03:09.938 CC lib/nvmf/subsystem.o 00:03:09.938 CC lib/ftl/ftl_io.o 00:03:09.938 CC lib/nvmf/nvmf.o 00:03:09.938 LIB libspdk_nbd.a 00:03:09.938 CC lib/scsi/port.o 00:03:09.938 SO libspdk_nbd.so.7.0 00:03:09.938 CC lib/ublk/ublk_rpc.o 00:03:09.938 SYMLINK libspdk_nbd.so 00:03:09.938 CC lib/nvmf/nvmf_rpc.o 00:03:09.938 CC lib/ftl/ftl_sb.o 00:03:10.195 CC lib/scsi/scsi.o 00:03:10.195 CC lib/ftl/ftl_l2p.o 00:03:10.195 CC lib/scsi/scsi_bdev.o 00:03:10.195 LIB libspdk_ublk.a 00:03:10.195 SO libspdk_ublk.so.3.0 00:03:10.195 CC lib/ftl/ftl_l2p_flat.o 00:03:10.195 CC lib/ftl/ftl_nv_cache.o 00:03:10.195 CC lib/ftl/ftl_band.o 00:03:10.454 SYMLINK libspdk_ublk.so 00:03:10.454 CC lib/nvmf/transport.o 00:03:10.454 CC lib/nvmf/tcp.o 00:03:10.712 CC lib/nvmf/stubs.o 00:03:10.712 CC lib/scsi/scsi_pr.o 00:03:10.712 CC lib/scsi/scsi_rpc.o 00:03:10.969 CC lib/scsi/task.o 00:03:10.969 CC lib/ftl/ftl_band_ops.o 00:03:10.969 CC lib/nvmf/mdns_server.o 00:03:10.969 CC lib/nvmf/rdma.o 00:03:11.236 LIB libspdk_scsi.a 00:03:11.236 CC lib/ftl/ftl_writer.o 00:03:11.236 SO libspdk_scsi.so.9.0 00:03:11.236 CC lib/nvmf/auth.o 00:03:11.236 CC lib/ftl/ftl_rq.o 00:03:11.236 CC lib/ftl/ftl_reloc.o 00:03:11.236 SYMLINK libspdk_scsi.so 00:03:11.236 CC lib/ftl/ftl_l2p_cache.o 00:03:11.492 CC lib/ftl/ftl_p2l.o 00:03:11.492 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.492 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.492 CC lib/iscsi/conn.o 00:03:11.749 CC lib/vhost/vhost.o 00:03:11.749 CC lib/vhost/vhost_rpc.o 00:03:11.749 CC lib/vhost/vhost_scsi.o 00:03:11.749 CC lib/vhost/vhost_blk.o 00:03:11.749 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.749 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.007 CC lib/vhost/rte_vhost_user.o 00:03:12.007 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.007 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.264 CC lib/iscsi/init_grp.o 00:03:12.264 CC lib/iscsi/iscsi.o 00:03:12.264 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.264 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.264 CC lib/iscsi/md5.o 00:03:12.563 CC lib/iscsi/param.o 00:03:12.563 CC lib/iscsi/portal_grp.o 00:03:12.563 CC lib/iscsi/tgt_node.o 00:03:12.563 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.563 CC lib/iscsi/iscsi_subsystem.o 00:03:12.822 CC lib/iscsi/iscsi_rpc.o 00:03:12.822 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.822 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.822 CC lib/iscsi/task.o 00:03:13.081 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.081 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.081 CC lib/ftl/utils/ftl_conf.o 00:03:13.081 CC lib/ftl/utils/ftl_md.o 00:03:13.081 LIB libspdk_vhost.a 00:03:13.081 CC lib/ftl/utils/ftl_mempool.o 00:03:13.339 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.339 SO libspdk_vhost.so.8.0 00:03:13.339 CC lib/ftl/utils/ftl_property.o 00:03:13.339 LIB libspdk_nvmf.a 00:03:13.339 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.339 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.339 SYMLINK libspdk_vhost.so 00:03:13.339 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.339 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.596 SO libspdk_nvmf.so.18.1 00:03:13.596 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.596 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.596 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.596 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.596 LIB libspdk_iscsi.a 00:03:13.851 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.851 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.851 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.851 CC lib/ftl/base/ftl_base_dev.o 00:03:13.851 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.851 SO libspdk_iscsi.so.8.0 00:03:13.851 CC lib/ftl/ftl_trace.o 00:03:13.851 SYMLINK libspdk_nvmf.so 00:03:14.108 SYMLINK libspdk_iscsi.so 00:03:14.108 LIB libspdk_ftl.a 00:03:14.366 SO libspdk_ftl.so.9.0 00:03:14.625 SYMLINK libspdk_ftl.so 00:03:15.193 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.193 CC module/keyring/linux/keyring.o 00:03:15.193 CC module/accel/iaa/accel_iaa.o 00:03:15.193 CC module/accel/dsa/accel_dsa.o 00:03:15.193 CC module/accel/ioat/accel_ioat.o 00:03:15.193 CC module/keyring/file/keyring.o 00:03:15.193 CC module/blob/bdev/blob_bdev.o 00:03:15.193 CC module/accel/error/accel_error.o 00:03:15.193 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.193 CC module/sock/posix/posix.o 00:03:15.193 LIB libspdk_env_dpdk_rpc.a 00:03:15.452 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.452 CC module/keyring/linux/keyring_rpc.o 00:03:15.452 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.452 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.452 CC module/keyring/file/keyring_rpc.o 00:03:15.452 LIB libspdk_scheduler_dynamic.a 00:03:15.452 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.452 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.452 LIB libspdk_keyring_linux.a 00:03:15.452 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.452 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.710 LIB libspdk_accel_ioat.a 00:03:15.710 CC module/accel/error/accel_error_rpc.o 00:03:15.710 SO libspdk_keyring_linux.so.1.0 00:03:15.710 SO libspdk_accel_ioat.so.6.0 00:03:15.710 LIB libspdk_accel_iaa.a 00:03:15.710 LIB libspdk_keyring_file.a 00:03:15.710 SO libspdk_accel_iaa.so.3.0 00:03:15.710 LIB libspdk_blob_bdev.a 00:03:15.710 SYMLINK libspdk_keyring_linux.so 00:03:15.710 LIB libspdk_accel_error.a 00:03:15.710 SYMLINK libspdk_accel_ioat.so 00:03:15.710 SO libspdk_blob_bdev.so.11.0 00:03:15.710 SO libspdk_keyring_file.so.1.0 00:03:15.710 LIB libspdk_accel_dsa.a 00:03:15.710 SO libspdk_accel_error.so.2.0 00:03:15.710 SYMLINK libspdk_accel_iaa.so 00:03:15.710 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.710 SO libspdk_accel_dsa.so.5.0 00:03:15.710 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.968 SYMLINK libspdk_keyring_file.so 00:03:15.968 SYMLINK libspdk_blob_bdev.so 00:03:15.968 SYMLINK libspdk_accel_error.so 00:03:15.968 SYMLINK libspdk_accel_dsa.so 00:03:15.968 LIB libspdk_scheduler_gscheduler.a 00:03:15.968 CC module/sock/uring/uring.o 00:03:15.968 SO libspdk_scheduler_gscheduler.so.4.0 00:03:15.968 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.968 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.225 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:16.225 CC module/bdev/delay/vbdev_delay.o 00:03:16.225 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.225 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.225 CC module/bdev/error/vbdev_error.o 00:03:16.225 CC module/bdev/gpt/gpt.o 00:03:16.225 CC module/bdev/malloc/bdev_malloc.o 00:03:16.225 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.225 LIB libspdk_sock_posix.a 00:03:16.225 SO libspdk_sock_posix.so.6.0 00:03:16.225 CC module/bdev/null/bdev_null.o 00:03:16.483 SYMLINK libspdk_sock_posix.so 00:03:16.483 CC module/bdev/null/bdev_null_rpc.o 00:03:16.483 CC 
module/bdev/gpt/vbdev_gpt.o 00:03:16.483 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.483 CC module/bdev/nvme/bdev_nvme.o 00:03:16.483 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.483 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.483 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.483 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.483 LIB libspdk_bdev_null.a 00:03:16.740 SO libspdk_bdev_null.so.6.0 00:03:16.740 LIB libspdk_blobfs_bdev.a 00:03:16.740 SO libspdk_blobfs_bdev.so.6.0 00:03:16.740 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.740 LIB libspdk_bdev_gpt.a 00:03:16.740 LIB libspdk_bdev_error.a 00:03:16.740 SYMLINK libspdk_bdev_null.so 00:03:16.740 CC module/bdev/nvme/nvme_rpc.o 00:03:16.740 SO libspdk_bdev_error.so.6.0 00:03:16.740 SO libspdk_bdev_gpt.so.6.0 00:03:16.740 SYMLINK libspdk_blobfs_bdev.so 00:03:16.740 LIB libspdk_bdev_delay.a 00:03:16.740 LIB libspdk_bdev_malloc.a 00:03:16.740 SO libspdk_bdev_delay.so.6.0 00:03:16.740 SO libspdk_bdev_malloc.so.6.0 00:03:16.740 SYMLINK libspdk_bdev_gpt.so 00:03:16.740 LIB libspdk_sock_uring.a 00:03:16.740 SYMLINK libspdk_bdev_error.so 00:03:16.998 SO libspdk_sock_uring.so.5.0 00:03:16.998 SYMLINK libspdk_bdev_malloc.so 00:03:16.998 SYMLINK libspdk_bdev_delay.so 00:03:16.998 SYMLINK libspdk_sock_uring.so 00:03:16.998 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.998 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.998 CC module/bdev/raid/bdev_raid.o 00:03:16.998 CC module/bdev/split/vbdev_split.o 00:03:16.998 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.998 LIB libspdk_bdev_lvol.a 00:03:17.256 CC module/bdev/uring/bdev_uring.o 00:03:17.256 SO libspdk_bdev_lvol.so.6.0 00:03:17.256 CC module/bdev/aio/bdev_aio.o 00:03:17.256 CC module/bdev/uring/bdev_uring_rpc.o 00:03:17.256 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.256 SYMLINK libspdk_bdev_lvol.so 00:03:17.256 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.514 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.514 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.514 CC module/bdev/nvme/vbdev_opal.o 00:03:17.514 CC module/bdev/ftl/bdev_ftl.o 00:03:17.514 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.514 LIB libspdk_bdev_aio.a 00:03:17.514 LIB libspdk_bdev_uring.a 00:03:17.514 LIB libspdk_bdev_split.a 00:03:17.514 LIB libspdk_bdev_passthru.a 00:03:17.514 SO libspdk_bdev_aio.so.6.0 00:03:17.514 SO libspdk_bdev_uring.so.6.0 00:03:17.514 SO libspdk_bdev_split.so.6.0 00:03:17.514 SO libspdk_bdev_passthru.so.6.0 00:03:17.772 LIB libspdk_bdev_zone_block.a 00:03:17.772 SYMLINK libspdk_bdev_uring.so 00:03:17.772 SYMLINK libspdk_bdev_aio.so 00:03:17.772 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.772 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.772 SYMLINK libspdk_bdev_split.so 00:03:17.772 SYMLINK libspdk_bdev_passthru.so 00:03:17.772 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.772 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.772 SO libspdk_bdev_zone_block.so.6.0 00:03:17.772 CC module/bdev/raid/raid0.o 00:03:17.772 SYMLINK libspdk_bdev_zone_block.so 00:03:17.772 CC module/bdev/raid/raid1.o 00:03:17.772 CC module/bdev/raid/concat.o 00:03:17.772 CC module/bdev/iscsi/bdev_iscsi.o 00:03:18.031 LIB libspdk_bdev_ftl.a 00:03:18.031 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.031 SO libspdk_bdev_ftl.so.6.0 00:03:18.031 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.031 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.031 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.031 SYMLINK libspdk_bdev_ftl.so 00:03:18.290 LIB libspdk_bdev_raid.a 00:03:18.290 LIB 
libspdk_bdev_iscsi.a 00:03:18.290 SO libspdk_bdev_raid.so.6.0 00:03:18.290 SO libspdk_bdev_iscsi.so.6.0 00:03:18.290 SYMLINK libspdk_bdev_iscsi.so 00:03:18.290 SYMLINK libspdk_bdev_raid.so 00:03:18.548 LIB libspdk_bdev_virtio.a 00:03:18.548 SO libspdk_bdev_virtio.so.6.0 00:03:18.806 SYMLINK libspdk_bdev_virtio.so 00:03:19.064 LIB libspdk_bdev_nvme.a 00:03:19.064 SO libspdk_bdev_nvme.so.7.0 00:03:19.350 SYMLINK libspdk_bdev_nvme.so 00:03:19.607 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.607 CC module/event/subsystems/sock/sock.o 00:03:19.607 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.607 CC module/event/subsystems/keyring/keyring.o 00:03:19.865 CC module/event/subsystems/vmd/vmd.o 00:03:19.865 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.865 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.865 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.865 LIB libspdk_event_sock.a 00:03:19.865 LIB libspdk_event_scheduler.a 00:03:19.865 LIB libspdk_event_vmd.a 00:03:19.865 SO libspdk_event_sock.so.5.0 00:03:19.865 LIB libspdk_event_iobuf.a 00:03:19.865 SO libspdk_event_scheduler.so.4.0 00:03:19.865 LIB libspdk_event_keyring.a 00:03:19.865 LIB libspdk_event_vhost_blk.a 00:03:19.865 SO libspdk_event_vmd.so.6.0 00:03:19.865 SO libspdk_event_keyring.so.1.0 00:03:19.865 SO libspdk_event_iobuf.so.3.0 00:03:19.865 SO libspdk_event_vhost_blk.so.3.0 00:03:19.865 SYMLINK libspdk_event_scheduler.so 00:03:20.123 SYMLINK libspdk_event_sock.so 00:03:20.123 SYMLINK libspdk_event_keyring.so 00:03:20.123 SYMLINK libspdk_event_vmd.so 00:03:20.123 SYMLINK libspdk_event_vhost_blk.so 00:03:20.123 SYMLINK libspdk_event_iobuf.so 00:03:20.382 CC module/event/subsystems/accel/accel.o 00:03:20.382 LIB libspdk_event_accel.a 00:03:20.382 SO libspdk_event_accel.so.6.0 00:03:20.640 SYMLINK libspdk_event_accel.so 00:03:20.899 CC module/event/subsystems/bdev/bdev.o 00:03:20.899 LIB libspdk_event_bdev.a 00:03:21.158 SO libspdk_event_bdev.so.6.0 00:03:21.158 SYMLINK libspdk_event_bdev.so 00:03:21.416 CC module/event/subsystems/scsi/scsi.o 00:03:21.416 CC module/event/subsystems/nbd/nbd.o 00:03:21.416 CC module/event/subsystems/ublk/ublk.o 00:03:21.416 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.417 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.417 LIB libspdk_event_nbd.a 00:03:21.674 LIB libspdk_event_ublk.a 00:03:21.674 SO libspdk_event_nbd.so.6.0 00:03:21.674 LIB libspdk_event_scsi.a 00:03:21.674 SO libspdk_event_ublk.so.3.0 00:03:21.674 SYMLINK libspdk_event_nbd.so 00:03:21.674 SO libspdk_event_scsi.so.6.0 00:03:21.674 LIB libspdk_event_nvmf.a 00:03:21.674 SYMLINK libspdk_event_ublk.so 00:03:21.674 SYMLINK libspdk_event_scsi.so 00:03:21.674 SO libspdk_event_nvmf.so.6.0 00:03:21.932 SYMLINK libspdk_event_nvmf.so 00:03:21.932 CC module/event/subsystems/iscsi/iscsi.o 00:03:21.932 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.191 LIB libspdk_event_vhost_scsi.a 00:03:22.191 LIB libspdk_event_iscsi.a 00:03:22.191 SO libspdk_event_vhost_scsi.so.3.0 00:03:22.191 SO libspdk_event_iscsi.so.6.0 00:03:22.191 SYMLINK libspdk_event_vhost_scsi.so 00:03:22.191 SYMLINK libspdk_event_iscsi.so 00:03:22.449 SO libspdk.so.6.0 00:03:22.449 SYMLINK libspdk.so 00:03:22.707 TEST_HEADER include/spdk/accel.h 00:03:22.707 TEST_HEADER include/spdk/accel_module.h 00:03:22.707 TEST_HEADER include/spdk/assert.h 00:03:22.707 TEST_HEADER include/spdk/barrier.h 00:03:22.707 TEST_HEADER include/spdk/base64.h 00:03:22.707 CXX app/trace/trace.o 00:03:22.707 TEST_HEADER include/spdk/bdev.h 00:03:22.707 
TEST_HEADER include/spdk/bdev_module.h 00:03:22.707 CC app/trace_record/trace_record.o 00:03:22.707 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.707 TEST_HEADER include/spdk/bit_array.h 00:03:22.707 TEST_HEADER include/spdk/bit_pool.h 00:03:22.707 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.707 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.707 TEST_HEADER include/spdk/blobfs.h 00:03:22.707 TEST_HEADER include/spdk/blob.h 00:03:22.707 TEST_HEADER include/spdk/conf.h 00:03:22.707 TEST_HEADER include/spdk/config.h 00:03:22.707 TEST_HEADER include/spdk/cpuset.h 00:03:22.707 TEST_HEADER include/spdk/crc16.h 00:03:22.707 TEST_HEADER include/spdk/crc32.h 00:03:22.707 TEST_HEADER include/spdk/crc64.h 00:03:22.707 TEST_HEADER include/spdk/dif.h 00:03:22.707 TEST_HEADER include/spdk/dma.h 00:03:22.707 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.707 TEST_HEADER include/spdk/endian.h 00:03:22.707 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.707 TEST_HEADER include/spdk/env.h 00:03:22.707 TEST_HEADER include/spdk/event.h 00:03:22.707 TEST_HEADER include/spdk/fd_group.h 00:03:22.707 TEST_HEADER include/spdk/fd.h 00:03:22.707 TEST_HEADER include/spdk/file.h 00:03:22.707 CC app/nvmf_tgt/nvmf_main.o 00:03:22.707 TEST_HEADER include/spdk/ftl.h 00:03:22.707 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.707 TEST_HEADER include/spdk/hexlify.h 00:03:22.707 TEST_HEADER include/spdk/histogram_data.h 00:03:22.707 TEST_HEADER include/spdk/idxd.h 00:03:22.707 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.707 TEST_HEADER include/spdk/init.h 00:03:22.707 TEST_HEADER include/spdk/ioat.h 00:03:22.707 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.707 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.707 TEST_HEADER include/spdk/json.h 00:03:22.707 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.707 TEST_HEADER include/spdk/keyring.h 00:03:22.707 TEST_HEADER include/spdk/keyring_module.h 00:03:22.707 CC examples/ioat/perf/perf.o 00:03:22.707 TEST_HEADER include/spdk/likely.h 00:03:22.707 TEST_HEADER include/spdk/log.h 00:03:22.707 TEST_HEADER include/spdk/lvol.h 00:03:22.707 TEST_HEADER include/spdk/memory.h 00:03:22.707 TEST_HEADER include/spdk/mmio.h 00:03:22.707 TEST_HEADER include/spdk/nbd.h 00:03:22.707 TEST_HEADER include/spdk/notify.h 00:03:22.707 TEST_HEADER include/spdk/nvme.h 00:03:22.707 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.707 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.707 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.707 CC examples/util/zipf/zipf.o 00:03:22.707 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.707 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.707 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.707 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.707 TEST_HEADER include/spdk/nvmf.h 00:03:22.707 CC test/thread/poller_perf/poller_perf.o 00:03:22.707 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.707 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.707 TEST_HEADER include/spdk/opal.h 00:03:22.707 TEST_HEADER include/spdk/opal_spec.h 00:03:22.707 TEST_HEADER include/spdk/pci_ids.h 00:03:22.707 TEST_HEADER include/spdk/pipe.h 00:03:22.707 TEST_HEADER include/spdk/queue.h 00:03:22.707 TEST_HEADER include/spdk/reduce.h 00:03:22.707 TEST_HEADER include/spdk/rpc.h 00:03:22.707 TEST_HEADER include/spdk/scheduler.h 00:03:22.707 CC test/app/bdev_svc/bdev_svc.o 00:03:22.707 TEST_HEADER include/spdk/scsi.h 00:03:22.707 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.707 TEST_HEADER include/spdk/sock.h 00:03:22.707 TEST_HEADER include/spdk/stdinc.h 00:03:22.707 CC test/dma/test_dma/test_dma.o 00:03:22.707 
TEST_HEADER include/spdk/string.h 00:03:22.707 TEST_HEADER include/spdk/thread.h 00:03:22.707 TEST_HEADER include/spdk/trace.h 00:03:22.707 TEST_HEADER include/spdk/trace_parser.h 00:03:22.707 TEST_HEADER include/spdk/tree.h 00:03:22.707 TEST_HEADER include/spdk/ublk.h 00:03:22.707 TEST_HEADER include/spdk/util.h 00:03:22.707 TEST_HEADER include/spdk/uuid.h 00:03:22.707 TEST_HEADER include/spdk/version.h 00:03:22.707 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.707 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.707 TEST_HEADER include/spdk/vhost.h 00:03:22.965 TEST_HEADER include/spdk/vmd.h 00:03:22.965 TEST_HEADER include/spdk/xor.h 00:03:22.965 TEST_HEADER include/spdk/zipf.h 00:03:22.965 CXX test/cpp_headers/accel.o 00:03:22.965 LINK iscsi_tgt 00:03:22.965 LINK spdk_trace_record 00:03:22.965 LINK nvmf_tgt 00:03:22.965 LINK zipf 00:03:22.965 LINK poller_perf 00:03:22.965 LINK bdev_svc 00:03:22.965 LINK spdk_trace 00:03:22.965 CXX test/cpp_headers/accel_module.o 00:03:23.223 LINK ioat_perf 00:03:23.223 LINK test_dma 00:03:23.223 CXX test/cpp_headers/assert.o 00:03:23.481 CC examples/ioat/verify/verify.o 00:03:23.481 CC app/spdk_tgt/spdk_tgt.o 00:03:23.481 CC test/app/histogram_perf/histogram_perf.o 00:03:23.481 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.481 CC test/app/jsoncat/jsoncat.o 00:03:23.481 CXX test/cpp_headers/barrier.o 00:03:23.481 CXX test/cpp_headers/base64.o 00:03:23.481 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:23.481 CC app/spdk_lspci/spdk_lspci.o 00:03:23.481 LINK verify 00:03:23.481 LINK histogram_perf 00:03:23.481 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.739 LINK interrupt_tgt 00:03:23.739 LINK spdk_tgt 00:03:23.739 CXX test/cpp_headers/bdev.o 00:03:23.739 LINK spdk_lspci 00:03:23.739 LINK jsoncat 00:03:23.739 CC app/spdk_nvme_perf/perf.o 00:03:23.997 CXX test/cpp_headers/bdev_module.o 00:03:23.997 CC test/env/vtophys/vtophys.o 00:03:23.997 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.997 CXX test/cpp_headers/bdev_zone.o 00:03:23.997 CXX test/cpp_headers/bit_array.o 00:03:23.997 CC test/app/stub/stub.o 00:03:23.997 LINK nvme_fuzz 00:03:24.254 CC examples/thread/thread/thread_ex.o 00:03:24.254 LINK vtophys 00:03:24.254 LINK env_dpdk_post_init 00:03:24.254 LINK stub 00:03:24.254 CXX test/cpp_headers/bit_pool.o 00:03:24.513 CC test/event/event_perf/event_perf.o 00:03:24.513 LINK mem_callbacks 00:03:24.513 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.513 CC test/event/reactor/reactor.o 00:03:24.513 LINK thread 00:03:24.513 CXX test/cpp_headers/blob_bdev.o 00:03:24.513 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.513 CC test/event/reactor_perf/reactor_perf.o 00:03:24.513 CC test/env/memory/memory_ut.o 00:03:24.513 LINK event_perf 00:03:24.771 CC test/env/pci/pci_ut.o 00:03:24.771 LINK reactor 00:03:24.771 LINK reactor_perf 00:03:24.771 CXX test/cpp_headers/blobfs.o 00:03:25.029 CC test/event/app_repeat/app_repeat.o 00:03:25.029 LINK spdk_nvme_perf 00:03:25.029 CXX test/cpp_headers/blob.o 00:03:25.029 CC examples/sock/hello_world/hello_sock.o 00:03:25.029 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.029 LINK app_repeat 00:03:25.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.288 LINK pci_ut 00:03:25.288 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:25.288 CC app/spdk_nvme_identify/identify.o 00:03:25.288 CC examples/idxd/perf/perf.o 00:03:25.288 CXX test/cpp_headers/conf.o 00:03:25.288 LINK lsvmd 00:03:25.546 LINK hello_sock 00:03:25.546 CXX test/cpp_headers/config.o 00:03:25.546 CXX test/cpp_headers/cpuset.o 00:03:25.546 
CXX test/cpp_headers/crc16.o 00:03:25.546 CC test/event/scheduler/scheduler.o 00:03:25.803 CXX test/cpp_headers/crc32.o 00:03:25.803 LINK idxd_perf 00:03:25.803 CC examples/vmd/led/led.o 00:03:25.803 CXX test/cpp_headers/crc64.o 00:03:26.061 CXX test/cpp_headers/dif.o 00:03:26.061 LINK vhost_fuzz 00:03:26.061 LINK led 00:03:26.061 LINK memory_ut 00:03:26.061 CC test/rpc_client/rpc_client_test.o 00:03:26.061 LINK scheduler 00:03:26.061 CC test/nvme/aer/aer.o 00:03:26.061 CXX test/cpp_headers/dma.o 00:03:26.318 CC test/nvme/reset/reset.o 00:03:26.318 LINK rpc_client_test 00:03:26.318 CXX test/cpp_headers/endian.o 00:03:26.318 LINK aer 00:03:26.576 CC examples/accel/perf/accel_perf.o 00:03:26.576 CC test/accel/dif/dif.o 00:03:26.576 LINK iscsi_fuzz 00:03:26.576 CC test/nvme/sgl/sgl.o 00:03:26.576 CXX test/cpp_headers/env_dpdk.o 00:03:26.576 LINK reset 00:03:26.576 CXX test/cpp_headers/env.o 00:03:26.576 CC examples/blob/hello_world/hello_blob.o 00:03:26.576 CXX test/cpp_headers/event.o 00:03:26.576 LINK spdk_nvme_identify 00:03:26.834 LINK sgl 00:03:26.834 LINK hello_blob 00:03:26.834 CXX test/cpp_headers/fd_group.o 00:03:26.834 CC app/spdk_nvme_discover/discovery_aer.o 00:03:26.834 CC app/spdk_top/spdk_top.o 00:03:26.834 CC app/vhost/vhost.o 00:03:26.834 CC app/spdk_dd/spdk_dd.o 00:03:27.092 LINK dif 00:03:27.092 LINK accel_perf 00:03:27.092 CXX test/cpp_headers/fd.o 00:03:27.092 CC examples/blob/cli/blobcli.o 00:03:27.092 CC test/nvme/e2edp/nvme_dp.o 00:03:27.092 LINK spdk_nvme_discover 00:03:27.092 LINK vhost 00:03:27.351 CXX test/cpp_headers/file.o 00:03:27.351 CC test/blobfs/mkfs/mkfs.o 00:03:27.351 CXX test/cpp_headers/ftl.o 00:03:27.351 LINK nvme_dp 00:03:27.351 LINK spdk_dd 00:03:27.609 CC test/nvme/overhead/overhead.o 00:03:27.609 CC test/bdev/bdevio/bdevio.o 00:03:27.609 CC test/lvol/esnap/esnap.o 00:03:27.609 LINK mkfs 00:03:27.609 LINK blobcli 00:03:27.609 CXX test/cpp_headers/gpt_spec.o 00:03:27.609 CC app/fio/nvme/fio_plugin.o 00:03:27.609 CC test/nvme/err_injection/err_injection.o 00:03:27.867 LINK overhead 00:03:27.867 CXX test/cpp_headers/hexlify.o 00:03:27.867 LINK spdk_top 00:03:27.867 LINK bdevio 00:03:27.867 CC test/nvme/reserve/reserve.o 00:03:27.867 CC test/nvme/startup/startup.o 00:03:27.867 CXX test/cpp_headers/histogram_data.o 00:03:27.867 LINK err_injection 00:03:28.126 CC examples/nvme/hello_world/hello_world.o 00:03:28.126 CC test/nvme/simple_copy/simple_copy.o 00:03:28.126 LINK startup 00:03:28.126 CC test/nvme/connect_stress/connect_stress.o 00:03:28.126 LINK reserve 00:03:28.126 CXX test/cpp_headers/idxd.o 00:03:28.126 LINK spdk_nvme 00:03:28.384 CC test/nvme/boot_partition/boot_partition.o 00:03:28.384 CC test/nvme/compliance/nvme_compliance.o 00:03:28.384 LINK hello_world 00:03:28.384 LINK simple_copy 00:03:28.384 CXX test/cpp_headers/idxd_spec.o 00:03:28.384 LINK connect_stress 00:03:28.384 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.384 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.384 CC app/fio/bdev/fio_plugin.o 00:03:28.384 LINK boot_partition 00:03:28.644 CC examples/nvme/reconnect/reconnect.o 00:03:28.644 CXX test/cpp_headers/init.o 00:03:28.644 LINK fused_ordering 00:03:28.644 CC test/nvme/fdp/fdp.o 00:03:28.644 LINK doorbell_aers 00:03:28.644 LINK nvme_compliance 00:03:28.644 CXX test/cpp_headers/ioat.o 00:03:28.903 CC test/nvme/cuse/cuse.o 00:03:28.903 CXX test/cpp_headers/ioat_spec.o 00:03:28.903 CC examples/bdev/hello_world/hello_bdev.o 00:03:28.903 LINK reconnect 00:03:28.903 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:28.903 
CC examples/nvme/arbitration/arbitration.o 00:03:28.903 LINK fdp 00:03:28.903 CXX test/cpp_headers/iscsi_spec.o 00:03:29.161 CC examples/bdev/bdevperf/bdevperf.o 00:03:29.161 LINK spdk_bdev 00:03:29.161 LINK hello_bdev 00:03:29.161 CXX test/cpp_headers/json.o 00:03:29.161 CC examples/nvme/hotplug/hotplug.o 00:03:29.420 CXX test/cpp_headers/jsonrpc.o 00:03:29.420 CXX test/cpp_headers/keyring.o 00:03:29.420 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:29.420 CXX test/cpp_headers/keyring_module.o 00:03:29.420 LINK arbitration 00:03:29.420 CXX test/cpp_headers/likely.o 00:03:29.420 LINK nvme_manage 00:03:29.420 LINK hotplug 00:03:29.680 CXX test/cpp_headers/log.o 00:03:29.680 CXX test/cpp_headers/lvol.o 00:03:29.680 LINK cmb_copy 00:03:29.680 CXX test/cpp_headers/memory.o 00:03:29.680 CXX test/cpp_headers/mmio.o 00:03:29.680 CXX test/cpp_headers/nbd.o 00:03:29.680 CXX test/cpp_headers/notify.o 00:03:29.680 CXX test/cpp_headers/nvme.o 00:03:29.680 CXX test/cpp_headers/nvme_intel.o 00:03:29.680 CC examples/nvme/abort/abort.o 00:03:29.938 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.938 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.938 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:29.938 CXX test/cpp_headers/nvme_spec.o 00:03:29.938 LINK bdevperf 00:03:29.938 CXX test/cpp_headers/nvme_zns.o 00:03:29.938 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.938 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:30.197 CXX test/cpp_headers/nvmf.o 00:03:30.197 LINK pmr_persistence 00:03:30.197 CXX test/cpp_headers/nvmf_spec.o 00:03:30.197 LINK cuse 00:03:30.197 LINK abort 00:03:30.197 CXX test/cpp_headers/nvmf_transport.o 00:03:30.197 CXX test/cpp_headers/opal.o 00:03:30.197 CXX test/cpp_headers/opal_spec.o 00:03:30.197 CXX test/cpp_headers/pci_ids.o 00:03:30.456 CXX test/cpp_headers/pipe.o 00:03:30.456 CXX test/cpp_headers/queue.o 00:03:30.456 CXX test/cpp_headers/reduce.o 00:03:30.456 CXX test/cpp_headers/rpc.o 00:03:30.456 CXX test/cpp_headers/scheduler.o 00:03:30.456 CXX test/cpp_headers/scsi.o 00:03:30.456 CXX test/cpp_headers/scsi_spec.o 00:03:30.456 CXX test/cpp_headers/sock.o 00:03:30.456 CXX test/cpp_headers/stdinc.o 00:03:30.456 CXX test/cpp_headers/string.o 00:03:30.714 CXX test/cpp_headers/thread.o 00:03:30.714 CXX test/cpp_headers/trace.o 00:03:30.714 CXX test/cpp_headers/trace_parser.o 00:03:30.714 CC examples/nvmf/nvmf/nvmf.o 00:03:30.714 CXX test/cpp_headers/tree.o 00:03:30.714 CXX test/cpp_headers/ublk.o 00:03:30.714 CXX test/cpp_headers/util.o 00:03:30.714 CXX test/cpp_headers/uuid.o 00:03:30.714 CXX test/cpp_headers/version.o 00:03:30.714 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.714 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.714 CXX test/cpp_headers/vhost.o 00:03:30.714 CXX test/cpp_headers/vmd.o 00:03:30.714 CXX test/cpp_headers/xor.o 00:03:30.714 CXX test/cpp_headers/zipf.o 00:03:30.972 LINK nvmf 00:03:32.901 LINK esnap 00:03:33.160 ************************************ 00:03:33.160 END TEST make 00:03:33.160 ************************************ 00:03:33.160 00:03:33.160 real 1m14.063s 00:03:33.160 user 7m54.376s 00:03:33.160 sys 1m50.599s 00:03:33.160 08:16:25 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:33.160 08:16:25 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.160 08:16:25 -- common/autotest_common.sh@1142 -- $ return 0 00:03:33.160 08:16:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.160 08:16:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.160 08:16:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 
00:03:33.160 08:16:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.160 08:16:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.160 08:16:25 -- pm/common@44 -- $ pid=5133 00:03:33.160 08:16:25 -- pm/common@50 -- $ kill -TERM 5133 00:03:33.160 08:16:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.160 08:16:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.160 08:16:25 -- pm/common@44 -- $ pid=5135 00:03:33.160 08:16:25 -- pm/common@50 -- $ kill -TERM 5135 00:03:33.420 08:16:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.420 08:16:25 -- nvmf/common.sh@7 -- # uname -s 00:03:33.420 08:16:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.420 08:16:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.420 08:16:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.420 08:16:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.420 08:16:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.420 08:16:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.420 08:16:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.420 08:16:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.420 08:16:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.420 08:16:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.420 08:16:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:03:33.420 08:16:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:03:33.420 08:16:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.420 08:16:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.420 08:16:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:33.420 08:16:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.420 08:16:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.420 08:16:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.420 08:16:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.420 08:16:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.420 08:16:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.420 08:16:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.420 08:16:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.420 08:16:25 -- paths/export.sh@5 -- # export PATH 00:03:33.420 08:16:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.420 08:16:25 -- nvmf/common.sh@47 -- # : 0 00:03:33.420 08:16:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:33.420 08:16:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:33.420 08:16:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.420 08:16:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.420 08:16:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.420 08:16:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:33.420 08:16:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:33.420 08:16:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:33.420 08:16:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.420 08:16:25 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.420 08:16:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.420 08:16:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.420 08:16:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.420 08:16:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.420 08:16:25 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.420 08:16:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.420 08:16:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.420 08:16:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.420 08:16:25 -- spdk/autotest.sh@48 -- # udevadm_pid=52854 00:03:33.420 08:16:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.420 08:16:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.420 08:16:25 -- pm/common@17 -- # local monitor 00:03:33.420 08:16:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.420 08:16:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.420 08:16:25 -- pm/common@25 -- # sleep 1 00:03:33.420 08:16:25 -- pm/common@21 -- # date +%s 00:03:33.420 08:16:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721031385 00:03:33.420 08:16:25 -- pm/common@21 -- # date +%s 00:03:33.420 08:16:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721031385 00:03:33.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721031385_collect-vmstat.pm.log 00:03:33.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721031385_collect-cpu-load.pm.log 00:03:34.357 08:16:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.357 08:16:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.357 08:16:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:34.357 08:16:26 -- common/autotest_common.sh@10 -- # set +x 00:03:34.357 08:16:26 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.357 08:16:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:34.357 08:16:26 -- common/autotest_common.sh@10 -- # set +x 00:03:34.617 08:16:26 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.617 08:16:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.617 08:16:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.617 08:16:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.617 08:16:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.617 08:16:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.617 08:16:26 -- common/autotest_common.sh@1455 -- # uname 00:03:34.617 08:16:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:34.617 08:16:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.617 08:16:26 -- common/autotest_common.sh@1475 -- # uname 00:03:34.617 08:16:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:34.617 08:16:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:34.617 08:16:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:34.617 08:16:26 -- spdk/autotest.sh@72 -- # hash lcov 00:03:34.617 08:16:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:34.617 08:16:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:34.617 --rc lcov_branch_coverage=1 00:03:34.617 --rc lcov_function_coverage=1 00:03:34.617 --rc genhtml_branch_coverage=1 00:03:34.617 --rc genhtml_function_coverage=1 00:03:34.617 --rc genhtml_legend=1 00:03:34.617 --rc geninfo_all_blocks=1 00:03:34.617 ' 00:03:34.617 08:16:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:34.617 --rc lcov_branch_coverage=1 00:03:34.617 --rc lcov_function_coverage=1 00:03:34.617 --rc genhtml_branch_coverage=1 00:03:34.617 --rc genhtml_function_coverage=1 00:03:34.617 --rc genhtml_legend=1 00:03:34.617 --rc geninfo_all_blocks=1 00:03:34.617 ' 00:03:34.617 08:16:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:34.617 --rc lcov_branch_coverage=1 00:03:34.617 --rc lcov_function_coverage=1 00:03:34.617 --rc genhtml_branch_coverage=1 00:03:34.617 --rc genhtml_function_coverage=1 00:03:34.617 --rc genhtml_legend=1 00:03:34.617 --rc geninfo_all_blocks=1 00:03:34.617 --no-external' 00:03:34.617 08:16:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:34.617 --rc lcov_branch_coverage=1 00:03:34.617 --rc lcov_function_coverage=1 00:03:34.617 --rc genhtml_branch_coverage=1 00:03:34.617 --rc genhtml_function_coverage=1 00:03:34.617 --rc genhtml_legend=1 00:03:34.617 --rc geninfo_all_blocks=1 00:03:34.617 --no-external' 00:03:34.617 08:16:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:34.617 lcov: LCOV version 1.14 00:03:34.617 08:16:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:52.692 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:52.692 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:04.896 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:04.896 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:04.896 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:04.897 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:04.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:04.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:04.898 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:04.898 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:04.898 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:04.898 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:04.898 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:04.898 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:04.898 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:04.898 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:04.898 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:07.439 08:16:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:07.439 08:16:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:07.439 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:04:07.439 08:16:59 -- spdk/autotest.sh@91 -- # rm -f 00:04:07.439 08:16:59 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.005 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.005 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.261 08:17:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:08.261 08:17:00 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:08.261 08:17:00 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:08.261 08:17:00 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:08.261 08:17:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.261 08:17:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:08.261 08:17:00 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:08.261 08:17:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.261 08:17:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.261 08:17:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.261 08:17:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:08.261 08:17:00 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:08.261 08:17:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.261 08:17:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.261 08:17:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.261 08:17:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:08.261 08:17:00 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:08.261 08:17:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:08.261 08:17:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.261 08:17:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.261 08:17:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:08.261 08:17:00 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:08.261 08:17:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:08.261 08:17:00 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.261 08:17:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:08.261 08:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.261 08:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:08.261 08:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:08.261 08:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:08.261 08:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.261 No valid GPT data, bailing 00:04:08.261 08:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.261 08:17:00 -- scripts/common.sh@391 -- # pt= 00:04:08.261 08:17:00 -- scripts/common.sh@392 -- # return 1 00:04:08.261 08:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.261 1+0 records in 00:04:08.261 1+0 records out 00:04:08.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541432 s, 194 MB/s 00:04:08.261 08:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.261 08:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:08.261 08:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:08.261 08:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:08.261 08:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.261 No valid GPT data, bailing 00:04:08.261 08:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.261 08:17:00 -- scripts/common.sh@391 -- # pt= 00:04:08.261 08:17:00 -- scripts/common.sh@392 -- # return 1 00:04:08.261 08:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.261 1+0 records in 00:04:08.261 1+0 records out 00:04:08.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433347 s, 242 MB/s 00:04:08.261 08:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.261 08:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:08.261 08:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:08.261 08:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:08.261 08:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:08.261 No valid GPT data, bailing 00:04:08.261 08:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:08.261 08:17:00 -- scripts/common.sh@391 -- # pt= 00:04:08.261 08:17:00 -- scripts/common.sh@392 -- # return 1 00:04:08.261 08:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:08.261 1+0 records in 00:04:08.261 1+0 records out 00:04:08.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430069 s, 244 MB/s 00:04:08.261 08:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.261 08:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:08.261 08:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:08.261 08:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:08.261 08:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:08.542 No valid GPT data, bailing 00:04:08.542 08:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:08.542 08:17:00 -- scripts/common.sh@391 -- # pt= 00:04:08.542 08:17:00 -- scripts/common.sh@392 -- # return 1 00:04:08.542 08:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:04:08.542 1+0 records in 00:04:08.542 1+0 records out 00:04:08.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470632 s, 223 MB/s 00:04:08.542 08:17:00 -- spdk/autotest.sh@118 -- # sync 00:04:08.542 08:17:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.542 08:17:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.542 08:17:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.444 08:17:02 -- spdk/autotest.sh@124 -- # uname -s 00:04:10.444 08:17:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:10.444 08:17:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:10.444 08:17:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.444 08:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.444 08:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.444 ************************************ 00:04:10.444 START TEST setup.sh 00:04:10.444 ************************************ 00:04:10.444 08:17:02 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:10.444 * Looking for test storage... 00:04:10.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.444 08:17:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:10.444 08:17:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:10.444 08:17:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:10.444 08:17:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.444 08:17:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.444 08:17:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.444 ************************************ 00:04:10.444 START TEST acl 00:04:10.444 ************************************ 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:10.444 * Looking for test storage... 
00:04:10.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.444 08:17:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:10.444 08:17:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.444 08:17:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:10.444 08:17:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:10.444 08:17:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:10.444 08:17:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:10.444 08:17:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:10.444 08:17:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.444 08:17:02 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.375 08:17:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:11.375 08:17:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:11.375 08:17:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.375 08:17:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:11.375 08:17:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.375 08:17:03 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:11.941 08:17:03 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.941 Hugepages 00:04:11.941 node hugesize free / total 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.941 00:04:11.941 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.941 08:17:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:11.941 08:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:12.199 08:17:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:12.199 08:17:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.199 08:17:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.199 08:17:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.199 ************************************ 00:04:12.199 START TEST denied 00:04:12.199 ************************************ 00:04:12.199 08:17:04 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:12.199 08:17:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:12.199 08:17:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:12.199 08:17:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.199 08:17:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:12.199 08:17:04 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.170 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.170 08:17:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.428 00:04:13.428 real 0m1.413s 00:04:13.428 user 0m0.581s 00:04:13.428 sys 0m0.787s 00:04:13.428 08:17:05 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.428 08:17:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:13.428 ************************************ 00:04:13.428 END TEST denied 00:04:13.428 ************************************ 00:04:13.685 08:17:05 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:13.685 08:17:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:13.685 08:17:05 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.685 08:17:05 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.685 08:17:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.685 ************************************ 00:04:13.685 START TEST allowed 00:04:13.685 ************************************ 00:04:13.685 08:17:05 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:13.685 08:17:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:13.685 08:17:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:13.685 08:17:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.685 08:17:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.686 08:17:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:14.249 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.249 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:14.249 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:14.249 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:14.249 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:14.249 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:14.249 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:14.508 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:14.508 08:17:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:14.508 08:17:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.508 08:17:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.075 00:04:15.075 real 0m1.509s 00:04:15.075 user 0m0.642s 00:04:15.075 sys 0m0.857s 00:04:15.075 08:17:07 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:15.075 08:17:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:15.075 ************************************ 00:04:15.075 END TEST allowed 00:04:15.075 ************************************ 00:04:15.075 08:17:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:15.075 00:04:15.075 real 0m4.734s 00:04:15.075 user 0m2.077s 00:04:15.075 sys 0m2.603s 00:04:15.075 08:17:07 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.075 08:17:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:15.075 ************************************ 00:04:15.075 END TEST acl 00:04:15.075 ************************************ 00:04:15.075 08:17:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:15.075 08:17:07 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:15.075 08:17:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.075 08:17:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.075 08:17:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.075 ************************************ 00:04:15.075 START TEST hugepages 00:04:15.075 ************************************ 00:04:15.075 08:17:07 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:15.335 * Looking for test storage... 00:04:15.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6009436 kB' 'MemAvailable: 7390776 kB' 'Buffers: 2436 kB' 'Cached: 1595596 kB' 'SwapCached: 0 kB' 'Active: 436052 kB' 'Inactive: 1266696 kB' 'Active(anon): 115208 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106640 kB' 'Mapped: 48796 kB' 'Shmem: 10488 kB' 'KReclaimable: 61480 kB' 'Slab: 132404 kB' 'SReclaimable: 61480 kB' 'SUnreclaim: 70924 kB' 'KernelStack: 6300 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 337408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.335 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.336 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.337 08:17:07 
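The trace up to this point shows setup/common.sh's get_meminfo walking /proc/meminfo with an IFS=': ' read loop until it hits Hugepagesize (echo 2048), after which hugepages.sh records a 2048 kB default, enumerates the NUMA nodes (no_nodes=1 here), and clear_hp echoes 0 into every per-node hugepage bucket. A minimal sketch of that reset pattern, assuming only what the trace shows: the sysfs paths and the 2048 kB default are taken from the trace, while writing each bucket's nr_hugepages file (and needing root to do so) is the standard sysfs interface and is assumed rather than visible in the xtrace output.

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    default_hugepages=2048                      # kB, the Hugepagesize value found above

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$default_hugepages   # one entry per NUMA node
    done

    for node in "${!nodes_sys[@]}"; do
        for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
            echo 0 > "$hp/nr_hugepages"         # drop any pre-allocated pages of this size
        done
    done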
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.337 08:17:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:15.337 08:17:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.337 08:17:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.337 08:17:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.337 ************************************ 00:04:15.337 START TEST default_setup 00:04:15.337 ************************************ 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.337 08:17:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.165 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.165 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107232 kB' 'MemAvailable: 9488512 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 453228 kB' 'Inactive: 1266700 kB' 'Active(anon): 132384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123472 kB' 'Mapped: 48656 kB' 'Shmem: 10468 kB' 'KReclaimable: 61348 kB' 'Slab: 132248 kB' 'SReclaimable: 61348 kB' 'SUnreclaim: 70900 kB' 'KernelStack: 6240 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
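For the default_setup test itself, the trace shows get_test_nr_hugepages being called with 2097152 (kB) for node 0 and nr_hugepages coming out as 1024, which the /proc/meminfo snapshot printed above confirms (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB). The sizing is plain division; the derivation below is inferred from those traced values rather than quoted from hugepages.sh.

    size_kb=2097152            # argument passed to get_test_nr_hugepages in the trace
    hugepagesize_kb=2048       # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"       # 1024, matching nr_hugepages=1024 and HugePages_Total above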
00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.165 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107492 kB' 'MemAvailable: 9488772 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 453096 kB' 'Inactive: 1266700 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48656 kB' 'Shmem: 10468 kB' 'KReclaimable: 61348 kB' 'Slab: 132244 kB' 'SReclaimable: 61348 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6256 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.166 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
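verify_nr_hugepages drives the same get_meminfo helper repeatedly: the AnonHugePages pass above ended with echo 0 (anon=0), this HugePages_Surp pass ends the same way (surp=0), and HugePages_Rsvd is read next. A compact sketch of that lookup, assuming a single-node query against /proc/meminfo; the field names, the IFS=': ' split, and the echo-the-value-then-return convention are all visible in the trace, while the function name below is illustrative and the real helper also handles per-node meminfo files.

    # hypothetical stand-in for setup/common.sh's get_meminfo
    get_meminfo_field() {
        local get=$1 var val rest
        while IFS=': ' read -r var val rest; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # numeric value only; any "kB" unit lands in $rest
            return 0
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo_field AnonHugePages)    # 0 in the snapshot above
    surp=$(get_meminfo_field HugePages_Surp)   # 0 in the snapshot above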
00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.167 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
[setup/common.sh@31-32 xtrace: each remaining /proc/meminfo field, HardwareCorrupted through HugePages_Rsvd, is read and skipped with continue before HugePages_Surp matches]
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.168 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107492 kB' 'MemAvailable: 9488780 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 1266708 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 61348 kB' 'Slab: 132240 kB' 'SReclaimable: 61348 kB' 'SUnreclaim: 70892 kB' 'KernelStack: 6256 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@31-32 xtrace: each /proc/meminfo field from MemTotal through HugePages_Free is read and skipped with continue before HugePages_Rsvd matches]
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:16.431 nr_hugepages=1024
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:16.431 resv_hugepages=0
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:16.431 surplus_hugepages=0
00:04:16.431 anon_hugepages=0
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
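At this point the trace has established the pool the test expects: nr_hugepages=1024 with resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the script asserts that HugePages_Total equals nr_hugepages + surp + resv. A minimal stand-alone sketch of that same accounting check, using a hypothetical awk-based helper rather than SPDK's actual setup/common.sh, could look like this:

    #!/usr/bin/env bash
    # Illustrative re-check of the accounting printed above; names are ours, not SPDK's.
    get_field() {
        # Print the numeric value of one /proc/meminfo field.
        local field=$1
        awk -v f="$field" -F': +' '$1 == f {print $2 + 0}' /proc/meminfo
    }

    total=$(get_field HugePages_Total)
    resv=$(get_field HugePages_Rsvd)
    surp=$(get_field HugePages_Surp)
    nr=$(cat /proc/sys/vm/nr_hugepages)   # default-size pool requested by the test

    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == nr + surp + resv )) && echo OK || echo MISMATCH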
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.431 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107492 kB' 'MemAvailable: 9488780 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452992 kB' 'Inactive: 1266708 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123316 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 61348 kB' 'Slab: 132244 kB' 'SReclaimable: 61348 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6320 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@31-32 xtrace: each /proc/meminfo field from MemTotal through Unaccepted is read and skipped with continue before HugePages_Total matches]
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
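The assertion above covers the system-wide pool; the trace that follows (get_nodes and the node-0 get_meminfo call) repeats the bookkeeping per NUMA node by walking /sys/devices/system/node/node*. A rough sketch of that enumeration, with hypothetical names rather than the actual setup/hugepages.sh logic, might be:

    #!/usr/bin/env bash
    # Illustrative per-node pass: read each node's HugePages_Surp from its own meminfo.
    shopt -s nullglob
    declare -A node_surp

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}           # e.g. "0"
        meminfo=$node_dir/meminfo         # per-node file, lines prefixed "Node <n> ..."
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$meminfo")
        node_surp[$node]=$surp
        echo "node$node HugePages_Surp=$surp"
    done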
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.433 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107492 kB' 'MemUsed: 4134484 kB' 'SwapCached: 0 kB' 'Active: 453000 kB' 'Inactive: 1266712 kB' 'Active(anon): 132156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1598028 kB' 'Mapped: 48608 kB' 'AnonPages: 123292 kB' 'Shmem: 10468 kB' 'KernelStack: 6288 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61348 kB' 'Slab: 132244 kB' 'SReclaimable: 61348 kB' 'SUnreclaim: 70896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
-- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 
08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.434 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
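[editor note] The long runs of "IFS=': ' / read -r var val _ / continue" xtrace lines above all come from setup/common.sh's get_meminfo helper scanning /proc/meminfo for a single key (here HugePages_Surp). The following is a minimal sketch of that parsing pattern, reconstructed only from the traced commands (local get/node/var/val, mem_f=/proc/meminfo, mapfile -t mem, the "Node N" prefix strip, and the per-key match-or-continue loop); it is not the verbatim SPDK source, and the per-node meminfo path and exact loop form are assumptions.

    shopt -s extglob   # needed for the +([0-9]) pattern used when stripping "Node N"

    get_meminfo_sketch() {
        local get=$1 node=${2:-}        # key to look up, optional NUMA node id
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Assumed: with a node argument the per-node meminfo file is read instead,
        # matching the /sys/devices/system/node/node*/meminfo probe in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every key until the requested one
            echo "${val:-0}"                   # value only, e.g. "0" for HugePages_Surp
            return 0
        done
    }

    # usage (hypothetical): get_meminfo_sketch HugePages_Free
    #                       get_meminfo_sketch HugePages_Total 0

Each skipped key produces one "[[ <key> == ... ]] / continue" pair in the xtrace, which is why the log shows the full meminfo key list before the final "echo 0 / return 0".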
00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.435 node0=1024 expecting 1024 00:04:16.435 ************************************ 00:04:16.435 END TEST default_setup 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.435 00:04:16.435 real 0m1.031s 00:04:16.435 user 0m0.473s 00:04:16.435 sys 0m0.484s 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.435 08:17:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:16.435 ************************************ 00:04:16.435 08:17:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.435 08:17:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:16.435 08:17:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.435 08:17:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.435 08:17:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.435 ************************************ 00:04:16.435 START TEST per_node_1G_alloc 00:04:16.435 ************************************ 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.435 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.435 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.695 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.695 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9156116 kB' 'MemAvailable: 10537420 kB' 'Buffers: 2436 kB' 'Cached: 1595592 kB' 'SwapCached: 0 kB' 'Active: 453320 kB' 'Inactive: 1266716 kB' 'Active(anon): 132476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123624 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 61364 kB' 'Slab: 132336 kB' 'SReclaimable: 61364 kB' 'SUnreclaim: 70972 kB' 'KernelStack: 6248 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.695 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.696 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.959 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9156116 kB' 'MemAvailable: 10537420 kB' 'Buffers: 2436 kB' 'Cached: 1595592 kB' 'SwapCached: 0 kB' 'Active: 453480 kB' 'Inactive: 1266716 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 124000 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 61364 kB' 'Slab: 132332 kB' 'SReclaimable: 61364 kB' 'SUnreclaim: 70968 kB' 'KernelStack: 6216 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.960 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.960 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.961 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9156116 kB' 'MemAvailable: 10537424 kB' 'Buffers: 2436 kB' 'Cached: 1595596 kB' 'SwapCached: 0 kB' 'Active: 453036 kB' 'Inactive: 1266720 kB' 'Active(anon): 132192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123384 kB' 'Mapped: 48488 kB' 'Shmem: 10468 kB' 'KReclaimable: 61364 kB' 'Slab: 132332 kB' 'SReclaimable: 61364 kB' 'SUnreclaim: 70968 kB' 'KernelStack: 6260 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
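The long run of entries above and below is setup/common.sh's get_meminfo helper scanning one meminfo field per loop iteration under xtrace, which is why every field other than the one requested produces the same "IFS=': '" / "read -r var val _" / "[[ field == ... ]]" / "continue" quartet. A minimal sketch of that scan, assuming only what the trace shows; the helper name get_meminfo_sketch and the final return 1 fallback are illustrative and this is not the actual SPDK source:

```bash
#!/usr/bin/env bash
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    # A per-node query switches to the sysfs copy, as seen further down in the
    # trace with /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip MemTotal, MemFree, ... until it matches
        echo "$val"                         # e.g. 0 for HugePages_Surp in this run
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1                                # field not present (illustrative fallback)
}

get_meminfo_sketch HugePages_Surp           # prints 0 on the machine traced here
```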
00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.962 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 
08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.963 nr_hugepages=512 00:04:16.963 resv_hugepages=0 00:04:16.963 surplus_hugepages=0 00:04:16.963 anon_hugepages=0 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.963 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9156636 kB' 'MemAvailable: 10537944 kB' 'Buffers: 2436 kB' 'Cached: 1595596 kB' 'SwapCached: 0 kB' 'Active: 453080 kB' 'Inactive: 1266720 kB' 'Active(anon): 132236 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 
kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48488 kB' 'Shmem: 10468 kB' 'KReclaimable: 61364 kB' 'Slab: 132332 kB' 'SReclaimable: 61364 kB' 'SUnreclaim: 70968 kB' 'KernelStack: 6260 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
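With the reserved and surplus counts back as zero, the values echoed earlier (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the arithmetic checks traced at setup/hugepages.sh@107 and repeated at @110 once HugePages_Total has been re-read. A hedged condensation of that bookkeeping, reusing the get_meminfo_sketch helper from the sketch above; the variable name want is illustrative:

```bash
# Hedged condensation of the accounting traced at setup/hugepages.sh@102-@110.
want=512                                             # pages this test requested
nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 512 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0
surp=$(get_meminfo_sketch HugePages_Surp)            # 0
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
# The pool only counts as healthy if everything requested is allocated and
# nothing is sitting in the surplus or reserved buckets.
(( want == nr_hugepages + surp + resv )) && (( want == nr_hugepages ))
```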
00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.964 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 
08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9156764 kB' 'MemUsed: 3085212 kB' 'SwapCached: 0 kB' 'Active: 452708 kB' 'Inactive: 1266720 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1598032 kB' 'Mapped: 48488 kB' 'AnonPages: 123012 kB' 'Shmem: 10468 kB' 'KernelStack: 6244 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61364 kB' 'Slab: 132332 kB' 'SReclaimable: 61364 kB' 'SUnreclaim: 70968 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.965 08:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 [field-by-field /proc/meminfo scan: Active(file) through FileHugePages all fail the HugePages_Surp match and take the 'continue' branch] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.966 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.967 node0=512 expecting 512 00:04:16.967 ************************************ 00:04:16.967 END TEST per_node_1G_alloc 00:04:16.967 ************************************ 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:16.967 00:04:16.967 real 0m0.574s 00:04:16.967 user 0m0.290s 00:04:16.967 sys 0m0.296s 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.967 08:17:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.967 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.967 08:17:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:16.967 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.967 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.967 08:17:09 
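The trace above closes out per_node_1G_alloc: the scan reports 0 surplus pages, the per-node totals are folded into sorted_t/sorted_s, the result line 'node0=512 expecting 512' passes the [[ 512 == \5\1\2 ]] check, the elapsed time is printed, and run_test from common/autotest_common.sh immediately launches even_2G_alloc. A minimal sketch of that wrapper pattern (argument guard, START/END banners, timing, exit-code propagation) follows; run_test_sketch is a hypothetical stand-in, not SPDK's actual run_test.

  #!/usr/bin/env bash
  # Hypothetical stand-in for the run_test wrapper pattern visible in this log:
  # argument guard, START/END banners, `time` output, and exit-code propagation.
  run_test_sketch() {
      local name=$1; shift
      (( $# >= 1 )) || return 1        # same spirit as the '[' 2 -le 1 ']' guard above
      printf '%s\n' '************************************' \
                    "START TEST $name" \
                    '************************************'
      time "$@"                        # produces the real/user/sys lines seen in the log
      local rc=$?
      printf '%s\n' '************************************' \
                    "END TEST $name" \
                    '************************************'
      return "$rc"
  }

  check_node0() { echo 'node0=512 expecting 512'; [[ 512 == \5\1\2 ]]; }
  run_test_sketch per_node_1G_alloc check_node0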
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.967 ************************************ 00:04:16.967 START TEST even_2G_alloc 00:04:16.967 ************************************ 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.967 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.542 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.542 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc 
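In the block above, even_2G_alloc asks get_test_nr_hugepages for 2097152 kB: with the 2048 kB Hugepagesize reported in /proc/meminfo that is nr_hugepages=1024, get_test_nr_hugepages_per_node puts the whole count on the single node (nodes_test[0]=1024), and scripts/setup.sh is re-run with NRHUGE=1024 HUGE_EVEN_ALLOC=yes. A small sketch of that arithmetic, reusing the variable names from the trace; deriving default_hugepages from Hugepagesize is an assumption here, not something the trace shows.

  #!/usr/bin/env bash
  # Sketch of the size -> hugepage-count conversion shown in the xtrace above.
  size=2097152                                                                 # requested kB (2 GiB)
  default_hugepages=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)    # 2048 on this VM
  (( size >= default_hugepages )) || exit 1                                    # the @55 guard in the trace
  nr_hugepages=$(( size / default_hugepages ))                                 # 2097152 / 2048 = 1024

  # No user-supplied node list and one NUMA node: everything lands on node 0,
  # matching nodes_test[_no_nodes - 1]=1024 above.
  declare -a nodes_test
  nodes_test[0]=$nr_hugepages

  # The test then re-runs the setup script with these knobs (path taken from the log):
  echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh"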
-- setup/hugepages.sh@92 -- # local surp 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109604 kB' 'MemAvailable: 9490832 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 453296 kB' 'Inactive: 1266716 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132244 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71032 kB' 'KernelStack: 6260 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 
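Every get_meminfo call in this trace follows the same shape from setup/common.sh: pick /proc/meminfo (or the per-node file when a node is given), snapshot it with mapfile, strip any leading 'Node <n> ' prefix, then walk the lines with IFS=': ' read until the requested field matches and its value is echoed; the scan for AnonHugePages continues below. A condensed, self-contained sketch of that pattern; the real helper lives in setup/common.sh and this version is only illustrative.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip
  # Illustrative get_meminfo: print the value of one meminfo field, system-wide or per node.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # Same existence check as setup/common.sh@23 in the trace above.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # the long run of 'continue' seen in the log
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_sketch AnonHugePages      # 0 in the snapshot printed above
  get_meminfo_sketch HugePages_Total 0  # per-node variant: reads /sys/devices/system/node/node0/meminfo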
[field-by-field /proc/meminfo scan: Active(file) through VmallocTotal all fail the AnonHugePages match and take the 'continue' branch] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109604 kB' 'MemAvailable: 9490832 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 
1266716 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123196 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132248 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71036 kB' 'KernelStack: 6256 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 
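At this point verify_nr_hugepages has anon=0 and is repeating the same scan for HugePages_Surp (with HugePages_Rsvd next), before folding the per-node totals into the 'nodeN=X expecting Y' comparison seen at the end of the previous test. A rough, self-contained sketch of that verification shape; awk stands in for common.sh's get_meminfo, and the expected count is a placeholder rather than hugepages.sh's real bookkeeping.

  #!/usr/bin/env bash
  # Simplified shape of the verification step driven by setup/hugepages.sh.
  meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

  anon=0
  # Mirrors the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" THP check in the trace.
  if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
      anon=$(meminfo_val AnonHugePages)
  fi
  surp=$(meminfo_val HugePages_Surp)
  resv=$(meminfo_val HugePages_Rsvd)
  echo "anon=$anon surp=$surp resv=$resv"    # all three are 0 in the snapshots above

  # Per-node comparison in the spirit of the earlier "node0=512 expecting 512" line.
  expected=1024   # placeholder: what even_2G_alloc just asked setup.sh to allocate
  got=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
  echo "node0=$got expecting $expected"
  [[ $got == "$expected" ]]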
[field-by-field /proc/meminfo scan: SwapFree through HugePages_Total all fail the HugePages_Surp match and take the 'continue' branch] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109604 kB' 'MemAvailable: 9490832 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452624 kB' 'Inactive: 1266716 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132248 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71036 kB' 'KernelStack: 6256 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.545 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.546 nr_hugepages=1024 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.546 resv_hugepages=0 00:04:17.546 surplus_hugepages=0 00:04:17.546 anon_hugepages=0 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109604 kB' 'MemAvailable: 9490832 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452636 kB' 'Inactive: 1266716 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123212 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132248 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71036 kB' 'KernelStack: 6256 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.546 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.547 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109604 kB' 'MemUsed: 4132372 kB' 'SwapCached: 0 kB' 'Active: 452860 kB' 'Inactive: 1266716 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1598024 kB' 'Mapped: 48604 kB' 'AnonPages: 123180 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132248 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.547 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.547 08:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.548 node0=1024 expecting 1024 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.548 
08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.548 00:04:17.548 real 0m0.573s 00:04:17.548 user 0m0.277s 00:04:17.548 sys 0m0.298s 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.548 08:17:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.548 ************************************ 00:04:17.548 END TEST even_2G_alloc 00:04:17.548 ************************************ 00:04:17.548 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:17.548 08:17:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:17.548 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.548 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.548 08:17:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.548 ************************************ 00:04:17.548 START TEST odd_alloc 00:04:17.548 ************************************ 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # 
HUGE_EVEN_ALLOC=yes 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.548 08:17:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.110 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.110 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.110 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493136 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452912 kB' 'Inactive: 1266716 kB' 'Active(anon): 132068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123472 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132276 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71064 kB' 'KernelStack: 6276 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
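The key-by-key scan above and below is setup/common.sh's get_meminfo helper as exposed by xtrace: it snapshots the meminfo file into the mem array with mapfile, strips any leading "Node <n> " prefix (only relevant when a per-node /sys/devices/system/node/node*/meminfo file is read), then walks the entries with IFS=': ' read -r var val _ until the variable name equals the requested key and echoes that value (here AnonHugePages, so it returns 0). A minimal standalone sketch of the same pattern, reading only /proc/meminfo; the function name meminfo_field and the variable name unit are illustrative, not part of the harness:

  meminfo_field() {
      # Print the value column of one /proc/meminfo field (default 0),
      # mirroring the IFS=': ' read loop traced above.
      local get=$1 line var val unit
      local -a mem
      mapfile -t mem < /proc/meminfo
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val unit <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      echo 0
  }

  meminfo_field HugePages_Total   # should print 1025 while odd_alloc's pages are reserved, matching the snapshot above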
00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.111 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 
08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
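The numbers in the meminfo snapshots printed above are consistent with what odd_alloc asked for: get_test_nr_hugepages was called with 2098176 (kB, i.e. HUGEMEM=2049 MiB), Hugepagesize is reported as 2048 kB, and the trace settled on nr_hugepages=1025, which matches HugePages_Total: 1025, HugePages_Free: 1025 and Hugetlb: 2099200 kB (1025 * 2048 kB). The exact rounding inside get_test_nr_hugepages is not visible in this excerpt, but ceiling division reproduces the logged count; the variable names below are illustrative only:

  size_kb=2098176                        # HUGEMEM=2049 MiB, as passed to get_test_nr_hugepages
  hp_kb=2048                             # Hugepagesize from the snapshot above
  pages=$(( (size_kb + hp_kb - 1) / hp_kb ))
  echo "$pages"                          # 1025 -> agrees with HugePages_Total/Free above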
00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.112 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493136 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452916 kB' 'Inactive: 1266716 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48488 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132268 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71056 kB' 'KernelStack: 6256 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 
08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.113 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 
08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.114 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493136 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452648 kB' 'Inactive: 1266716 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123188 kB' 'Mapped: 48488 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132268 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71056 kB' 'KernelStack: 6256 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
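At this point verify_nr_hugepages has already recorded anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is running the same scan a third time for HugePages_Rsvd; judging by the even_2G_alloc block above, the run should end the same way, with a 'nodeN=... expecting ...' line and a string comparison against the expected count. When reading a log like this by hand, the handful of counters the verifier cares about can be pulled in one pass; this is only a reviewer's shortcut, not something setup/common.sh itself runs:

  awk '$1 ~ /^(AnonHugePages|HugePages_)/ {print $1, $2}' /proc/meminfo
  # Mid-test on this box that should show AnonHugePages: 0, HugePages_Total: 1025,
  # HugePages_Free: 1025, HugePages_Rsvd: 0 and HugePages_Surp: 0, matching the snapshots.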
00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.115 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:18.116 nr_hugepages=1025 00:04:18.116 resv_hugepages=0 00:04:18.116 surplus_hugepages=0 00:04:18.116 anon_hugepages=0 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.116 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493136 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452740 kB' 'Inactive: 1266716 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123288 kB' 'Mapped: 48488 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132272 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71060 kB' 'KernelStack: 6272 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:18.117 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111656 kB' 'MemUsed: 4130320 kB' 'SwapCached: 0 kB' 'Active: 452660 kB' 'Inactive: 1266716 kB' 'Active(anon): 131816 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1598024 kB' 'Mapped: 48488 kB' 'AnonPages: 123212 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132256 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.118 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
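The long run of "IFS=': '", "read -r var val _", and "continue" entries above and below is bash xtrace of one helper, get_meminfo in setup/common.sh, scanning a meminfo file key by key (here /sys/devices/system/node/node0/meminfo for HugePages_Surp; a few screens earlier /proc/meminfo for HugePages_Rsvd and HugePages_Total). The backslash-heavy right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how xtrace prints the quoted comparison string inside [[ ... ]]. The sketch below is a paraphrase reconstructed from the traced statements (common.sh@16-33), not the verbatim upstream source; the function and variable names are taken from the trace, the exact loop structure is an assumption.

#!/usr/bin/env bash
# Paraphrase of the get_meminfo helper the surrounding xtrace steps through (illustrative sketch).
shopt -s extglob                                    # needed for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}                        # key to look up, optional NUMA node number
    local mem_f=/proc/meminfo var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                # per-node files prefix every line with "Node N "
    while IFS=': ' read -r var val _; do            # these reads and compares are the bulk of the trace
        [[ $var == "$get" ]] || continue
        echo "$val"                                 # e.g. 1025 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 0                        # the lookup being traced around this point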
00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
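The "echo 0" just above is get_meminfo returning HugePages_Surp for node 0. The setup/hugepages.sh@116-130 statements that follow fold that value, together with resv=0 and the nr_hugepages=1025 echoed earlier, into per-node bookkeeping that ends in the 'node0=1025 expecting 1025' line and the final [[ 1025 == 1025 ]] check. Below is a condensed, illustrative restatement of that arithmetic using this run's values; how nodes_test and nodes_sys came to hold 1025 happens earlier in the log and is simply assumed here.

#!/usr/bin/env bash
# Condensed restatement of the odd_alloc bookkeeping, with this run's values.
nr_hugepages=1025 surp=0 resv=0
(( 1025 == nr_hugepages + surp + resv )) && echo "global hugepage totals reconcile"
nodes_test=([0]=1025)                          # per-node count gathered earlier in the test
nodes_sys=([0]=1025)                           # what sysfs reported for node 0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))             # hugepages.sh@116
    (( nodes_test[node] += 0 ))                # hugepages.sh@117: the HugePages_Surp value echoed above
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done
[[ ${nodes_test[0]} == "${nodes_sys[0]}" ]]    # mirrors the [[ 1025 == 1025 ]] a few lines below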
00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.119 node0=1025 expecting 1025 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:18.119 00:04:18.119 real 0m0.554s 00:04:18.119 user 0m0.256s 00:04:18.119 sys 0m0.298s 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.119 08:17:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.119 ************************************ 00:04:18.119 END TEST odd_alloc 00:04:18.120 ************************************ 00:04:18.378 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:18.378 08:17:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:18.378 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.378 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.378 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.378 ************************************ 00:04:18.378 START TEST custom_alloc 00:04:18.378 ************************************ 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:18.378 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.379 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.641 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.641 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.641 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9158272 kB' 'MemAvailable: 10539500 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 453348 kB' 'Inactive: 1266716 kB' 'Active(anon): 132504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123668 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132212 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71000 kB' 'KernelStack: 6232 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
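The long runs of IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue entries above and below are the xtrace of setup/common.sh's get_meminfo scanning a /proc/meminfo snapshot one field at a time until the requested key matches. A minimal sketch of what that traced helper appears to do, reconstructed from the trace itself (the names match the trace; the exact body is an approximation, not the SPDK source):

    # Reconstruction of the get_meminfo loop traced above; an approximation, not the SPDK source.
    shopt -s extglob                       # needed for the +([0-9]) pattern used below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, the per-node sysfs meminfo would be read instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # source of the repeated "continue" entries
            echo "$val"                        # e.g. "echo 0" for AnonHugePages above
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as get_meminfo AnonHugePages with no node argument, it prints 0 here, which hugepages.sh@97 stores as anon=0 further down.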
00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.642 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9158272 kB' 'MemAvailable: 10539500 kB' 'Buffers: 2436 kB' 'Cached: 
1595588 kB' 'SwapCached: 0 kB' 'Active: 452652 kB' 'Inactive: 1266716 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123292 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132236 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6260 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.643 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
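The same field-by-field scan repeats below for HugePages_Surp and then HugePages_Rsvd. For context, the count being verified was requested near the top of this block: the custom_alloc test builds a HUGENODE string from nodes_hp and hands it to scripts/setup.sh. A short sketch of that assembly, using only names and values visible in the trace (512 pages on node 0); the surrounding script logic is simplified:

    # Sketch of the HUGENODE assembly traced at hugepages.sh@175-187 above; simplified.
    declare -a nodes_hp HUGENODE
    _nr_hugepages=0
    nodes_hp[0]=512                                   # nodes_hp[0]=512 in the trace
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))         # running total ends at 512
    done
    # The trace then effectively runs: HUGENODE='nodes_hp[0]=512' scripts/setup.sh
    echo "HUGENODE=${HUGENODE[*]} (requesting ${_nr_hugepages} hugepages)"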
00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.644 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9158272 kB' 'MemAvailable: 10539500 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452748 kB' 'Inactive: 1266716 kB' 'Active(anon): 131904 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123312 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132236 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6244 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.645 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.646 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
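A quick consistency check on the meminfo snapshots dumped above (the figures are taken straight from those dumps): 512 huge pages at the reported Hugepagesize of 2048 kB account exactly for the Hugetlb line.

    # 'HugePages_Total: 512' * 'Hugepagesize: 2048 kB' from the dumps above
    echo $(( 512 * 2048 )) kB    # prints 1048576 kB, matching 'Hugetlb: 1048576 kB'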
00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:18.647 nr_hugepages=512 00:04:18.647 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.647 resv_hugepages=0 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.647 surplus_hugepages=0 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.647 anon_hugepages=0 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159312 kB' 'MemAvailable: 10540540 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1266716 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123184 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132232 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71020 kB' 'KernelStack: 6260 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.647 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.648 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.946 
08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.946 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159312 kB' 'MemUsed: 3082664 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 1266716 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1598024 kB' 'Mapped: 48692 kB' 'AnonPages: 123212 kB' 'Shmem: 10464 kB' 'KernelStack: 6260 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61212 kB' 'Slab: 132228 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.947 node0=512 expecting 512 00:04:18.947 ************************************ 00:04:18.947 END TEST custom_alloc 00:04:18.947 ************************************ 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:18.947 00:04:18.947 real 0m0.550s 00:04:18.947 user 0m0.247s 00:04:18.947 sys 0m0.306s 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.947 08:17:10 setup.sh.hugepages.custom_alloc 
-- common/autotest_common.sh@10 -- # set +x 00:04:18.947 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:18.947 08:17:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:18.948 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.948 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.948 08:17:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.948 ************************************ 00:04:18.948 START TEST no_shrink_alloc 00:04:18.948 ************************************ 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.948 08:17:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.208 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.208 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:19.208 
08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8105860 kB' 'MemAvailable: 9487088 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 453088 kB' 'Inactive: 1266716 kB' 'Active(anon): 132244 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123640 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132236 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6228 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
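[editor's note] At this point verify_nr_hugepages has started checking the 1024-page pool that get_test_nr_hugepages 2097152 0 requested a few entries earlier in the no_shrink_alloc setup. A rough sketch of that sizing step as the trace shows it (hypothetical reconstruction of setup/hugepages.sh@49-73; assumes the size argument is in kB, which matches the "Hugepagesize: 2048 kB" and "Hugetlb: 2097152 kB" lines in the dump above, and only the explicit-node path exercised here is shown):

declare -a nodes_test=()
nr_hugepages=0

# Hypothetical reconstruction of the traced sizing step.
get_test_nr_hugepages() {
    local size=$1; shift                # requested pool size, in kB (assumed)
    local node_ids=("$@")               # e.g. (0)
    local default_hugepages=2048        # kB, per "Hugepagesize: 2048 kB"
    local node
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    # Every listed node is expected to hold the full count, so node 0 ends up
    # expecting 1024 pages ("nodes_test[_no_nodes]=1024" in the trace).
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # nr_hugepages=1024 node0=1024

The verification pass that continues below then re-reads the HugePages_* counters through get_meminfo and compares them against these expectations, in the same way the custom_alloc trace above checked (( 512 == nr_hugepages + surp + resv )) before printing "node0=512 expecting 512".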
00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 
08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.208 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 
08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.209 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8105608 kB' 'MemAvailable: 9486836 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452940 kB' 'Inactive: 1266716 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123264 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132236 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6256 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.210 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8105608 kB' 'MemAvailable: 9486836 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 452700 kB' 'Inactive: 1266716 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122996 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132236 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6256 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.211 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.212 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.213 nr_hugepages=1024 00:04:19.213 resv_hugepages=0 00:04:19.213 surplus_hugepages=0 00:04:19.213 anon_hugepages=0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.213 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
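The xtrace above is the get_meminfo helper in SPDK's setup/common.sh at work: it loads /proc/meminfo (or a per-node meminfo file when node is set) into the mem array, then walks it with IFS=': ' and read -r var val _, skipping every key with continue until it matches the requested one (AnonHugePages, HugePages_Surp, HugePages_Rsvd above, HugePages_Total next) and echoing its value. A minimal standalone sketch of that lookup pattern, using a hypothetical function name and omitting the per-node and mapfile handling of the real helper, could look like this:

# Sketch only: simplified /proc/meminfo lookup in the style of the traced helper.
# get_meminfo_value is a hypothetical name; the real setup/common.sh differs.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every key until the requested one is found.
        [[ $var == "$get" ]] || continue
        echo "$val"     # value only; any trailing "kB" unit lands in $_
        return 0
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_value HugePages_Total -> 1024 on this test VM,
# matching the nr_hugepages=1024 echoed in the log above.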
00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8107688 kB' 'MemAvailable: 9488916 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 448624 kB' 'Inactive: 1266716 kB' 'Active(anon): 127780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 48084 kB' 'Shmem: 10464 kB' 'KReclaimable: 61212 kB' 'Slab: 132236 kB' 'SReclaimable: 61212 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 6272 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 337552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.474 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.475 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8114288 kB' 'MemUsed: 4127688 kB' 'SwapCached: 0 kB' 'Active: 447676 kB' 'Inactive: 1266716 kB' 'Active(anon): 126832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1598024 kB' 'Mapped: 47864 kB' 'AnonPages: 118260 kB' 
'Shmem: 10464 kB' 'KernelStack: 6144 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61208 kB' 'Slab: 132092 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 70884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 
08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.476 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.477 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.477 node0=1024 expecting 1024 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.477 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.737 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.737 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.737 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:19.737 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:19.737 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.737 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.737 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.737 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.737 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.737 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111020 kB' 'MemAvailable: 9492244 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 448632 kB' 'Inactive: 1266716 kB' 'Active(anon): 127788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118892 kB' 'Mapped: 48000 kB' 'Shmem: 10464 kB' 'KReclaimable: 61208 kB' 'Slab: 132040 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 70832 kB' 'KernelStack: 6164 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
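After scripts/setup.sh is re-run with NRHUGE=512 and CLEAR_HUGE=no and reports that 1024 pages are already allocated on node0, hugepages.sh@204 calls verify_nr_hugepages again, and its @89-130 bookkeeping is spread across the surrounding trace. An approximate shape of that function, assembled from the logged statements (variable names follow the trace; the control flow around them is an approximation, not the SPDK source):

    # Approximate reconstruction of verify_nr_hugepages() from the hugepages.sh@89-130
    # statements in this trace; nodes_test[] initialization is not visible here and is
    # assumed to happen elsewhere.
    verify_nr_hugepages() {
            local node sorted_t sorted_s surp resv anon        # (@89-94)
            # anonymous THP is only queried when transparent_hugepage is not [never];
            # the "always [madvise] never" string at @96 looks like that sysfs file's contents
            [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] &&
                    anon=$(get_meminfo AnonHugePages)          # (@97) 0 kB in this run
            surp=$(get_meminfo HugePages_Surp)                 # (@99) 0
            resv=$(get_meminfo HugePages_Rsvd)                 # (@100) 0
            echo "nr_hugepages=$nr_hugepages"                  # (@102) 1024
            echo "resv_hugepages=$resv"                        # (@103)
            echo "surplus_hugepages=$surp"                     # (@104)
            echo "anon_hugepages=$anon"                        # (@105)
            (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # (@107-110) 1024 == 1024
            get_nodes                                          # (@112) nodes_sys[0]=1024, no_nodes=1
            for node in "${!nodes_test[@]}"; do                # (@115-117) add reserved + per-node surplus
                    (( nodes_test[node] += resv ))
                    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
            done
            for node in "${!nodes_test[@]}"; do                # (@126-130) report and compare per node
                    sorted_t[nodes_test[node]]=1               # (@127)
                    sorted_s[nodes_sys[node]]=1                # (@127)
                    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
                    [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]
            done
    }

In this second pass the per-key scan that follows is the get_meminfo AnonHugePages call from @97; as in the first pass, it ends with an echo of 0, so the node0=1024 expectation is checked against an unchanged allocation.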
00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.738 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8110768 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 447988 kB' 'Inactive: 1266716 kB' 'Active(anon): 127144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118280 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61208 kB' 'Slab: 132032 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 70824 kB' 'KernelStack: 6144 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.739 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
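The long run of continue entries here is the body of get_meminfo HugePages_Surp: the full /proc/meminfo snapshot printed a few entries earlier is walked one field at a time with IFS=': ', and every field whose name is not the requested key falls through to continue, until HugePages_Surp itself is reached and its value (0) is echoed. A simplified sketch of that scan, assuming a direct read from /proc/meminfo rather than the harness's mapfile/printf plumbing (not the verbatim setup/common.sh):

  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do     # split "Key:   value kB" into key / value / unit
          [[ $var == "$get" ]] || continue     # each mismatch is one "continue" entry in the trace
          echo "$val"                          # matched: print the numeric value
          return 0
      done < /proc/meminfo
  }
  get_meminfo_sketch HugePages_Surp            # -> 0 on this runner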
00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 
08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8110768 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 448000 kB' 'Inactive: 1266716 kB' 'Active(anon): 127156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61208 kB' 'Slab: 132032 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 70824 kB' 'KernelStack: 6160 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
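Just before each scan the trace checks [[ -e /sys/devices/system/node/node/meminfo ]]: get_meminfo was called without a node, so the node suffix is empty, the test fails, and mem_f stays /proc/meminfo. When a node is passed, the per-node file's lines carry a "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") step strips. A hedged sketch of that source selection (simplified; extglob made explicit so the prefix strip works standalone):

  node=${1:-}                                        # empty => whole-system query
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  shopt -s extglob
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                   # drop "Node N " from per-node meminfo lines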
00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.742 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
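The snapshots printed above are self-consistent on the hugepage side: 'HugePages_Total: 1024' at 'Hugepagesize: 2048 kB' accounts for 1024 x 2048 kB = 2097152 kB, exactly the 'Hugetlb: 2097152 kB' line (2 GiB set aside for the pool), and 'HugePages_Free: 1024' shows none of it in use yet.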
00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.743 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.002 nr_hugepages=1024 00:04:20.002 resv_hugepages=0 00:04:20.002 surplus_hugepages=0 00:04:20.002 anon_hugepages=0 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.002 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8110768 kB' 'MemAvailable: 9491992 kB' 'Buffers: 2436 kB' 'Cached: 1595588 kB' 'SwapCached: 0 kB' 'Active: 447692 kB' 'Inactive: 1266716 kB' 'Active(anon): 126848 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118012 kB' 'Mapped: 47868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61208 kB' 'Slab: 132032 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 70824 kB' 'KernelStack: 6144 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
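With anon, surp and resv all back as 0, the trace above echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the two arithmetic checks traced at hugepages.sh@107 and @109, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), both hold; the get_meminfo HugePages_Total scan starting here re-reads the pool size for that comparison. An illustrative restatement of the accounting (mirrors the traced expressions; not the verbatim hugepages.sh):

  expected=1024                                  # pages requested by the no_shrink_alloc test
  anon=0; surp=0; resv=0                         # results of the three get_meminfo calls above
  nr_hugepages=1024                              # value the HugePages_Total scan below returns
  (( expected == nr_hugepages + surp + resv ))   # hugepages.sh@107: no surplus/reserved drift
  (( expected == nr_hugepages ))                 # hugepages.sh@109: pool size unchanged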
00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.003 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8110768 kB' 'MemUsed: 4131208 kB' 'SwapCached: 0 kB' 'Active: 
448360 kB' 'Inactive: 1266716 kB' 'Active(anon): 127516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1266716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1598024 kB' 'Mapped: 47868 kB' 'AnonPages: 118664 kB' 'Shmem: 10464 kB' 'KernelStack: 6160 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61208 kB' 'Slab: 132032 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 70824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 
08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.006 node0=1024 expecting 1024 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:20.006 00:04:20.006 real 0m1.077s 00:04:20.006 user 0m0.525s 00:04:20.006 sys 0m0.550s 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.006 08:17:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.006 ************************************ 00:04:20.006 END TEST no_shrink_alloc 00:04:20.006 ************************************ 00:04:20.006 08:17:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.006 
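The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" entries above is the xtrace of the get_meminfo helper in setup/common.sh scanning a meminfo file one "key: value" line at a time until it reaches the requested field (the backslashes are just how xtrace renders the quoted, literal match target). A minimal sketch of that kind of parser, reconstructed from the trace rather than copied from the SPDK sources, is:

    # Sketch of a get_meminfo-style helper (reconstructed from the xtrace
    # above, not the verbatim SPDK setup/common.sh). Reads /proc/meminfo,
    # or a per-NUMA-node meminfo file when a node is given, and prints the
    # value of the requested field.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2:-}   # e.g. get_meminfo_sketch HugePages_Surp 0
        local mem_f=/proc/meminfo mem line var val _

        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Total: 1024"; strip the
        # "Node N " prefix so both file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the "continue" storm in the trace
            echo "$val"                        # e.g. 1024, or a size in kB
            return 0
        done
        return 1
    }

With that in hand, the checks that follow in the trace, (( 1024 == nr_hugepages + surp + resv )) and the later per-node accounting, are simply comparing the HugePages_Total and HugePages_Surp values returned by this parser against what the test asked the kernel to allocate.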
08:17:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:20.006 08:17:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:20.006 00:04:20.006 real 0m4.787s 00:04:20.006 user 0m2.203s 00:04:20.006 sys 0m2.509s 00:04:20.006 08:17:12 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.006 08:17:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.006 ************************************ 00:04:20.006 END TEST hugepages 00:04:20.006 ************************************ 00:04:20.006 08:17:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:20.006 08:17:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:20.006 08:17:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.006 08:17:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.006 08:17:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.006 ************************************ 00:04:20.006 START TEST driver 00:04:20.006 ************************************ 00:04:20.006 08:17:12 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:20.006 * Looking for test storage... 00:04:20.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.006 08:17:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:20.006 08:17:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.006 08:17:12 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.571 08:17:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.571 08:17:12 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.571 08:17:12 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.571 08:17:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.571 ************************************ 00:04:20.571 START TEST guess_driver 00:04:20.571 ************************************ 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
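The clear_hp pass traced just above (the pair of "echo 0" entries under hugepages.sh@40/@41) returns every per-node hugepage pool to zero before the next test group starts. The xtrace only shows the echo itself, so the target file in the sketch below is inferred from the standard sysfs layout rather than read off the log; the function name is illustrative.

    # Rough equivalent of the hugepage cleanup traced above (illustrative
    # sketch; the nr_hugepages target is the standard sysfs location and is
    # inferred, not shown in the xtrace). Needs root to actually write.
    clear_hugepages_sketch() {
        local node_dir hp_dir
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            for hp_dir in "$node_dir"/hugepages/hugepages-*; do
                [[ -w $hp_dir/nr_hugepages ]] || continue
                echo 0 > "$hp_dir/nr_hugepages"   # release the reserved pages
            done
        done
    }

The export CLEAR_HUGE=yes that follows in the trace is the knob later scripts/setup.sh invocations check so that any leftover hugepage allocations are cleared rather than reused.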
00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:20.571 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.571 Looking for driver=uio_pci_generic 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.571 08:17:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.505 08:17:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.071 00:04:22.071 real 0m1.427s 00:04:22.071 user 0m0.546s 00:04:22.071 sys 0m0.869s 00:04:22.071 08:17:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:22.071 08:17:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.071 ************************************ 00:04:22.071 END TEST guess_driver 00:04:22.071 ************************************ 00:04:22.071 08:17:14 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:22.071 00:04:22.071 real 0m2.080s 00:04:22.071 user 0m0.756s 00:04:22.071 sys 0m1.368s 00:04:22.071 08:17:14 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.071 08:17:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.071 ************************************ 00:04:22.071 END TEST driver 00:04:22.071 ************************************ 00:04:22.071 08:17:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:22.071 08:17:14 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.071 08:17:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.071 08:17:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.071 08:17:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.071 ************************************ 00:04:22.071 START TEST devices 00:04:22.071 ************************************ 00:04:22.071 08:17:14 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.329 * Looking for test storage... 00:04:22.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.329 08:17:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.329 08:17:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:22.329 08:17:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.329 08:17:14 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
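The guess_driver trace that just finished is choosing which userspace I/O driver the rest of the setup will bind the NVMe devices to: vfio is preferred when the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), and the script otherwise falls back to uio_pci_generic, accepting it only if modprobe can resolve the module to a .ko on the running kernel. On this VM the /sys/kernel/iommu_groups glob expands to nothing, which is why the trace shows (( 0 > 0 )) failing and the run settling on uio_pci_generic. A condensed sketch of that decision, with illustrative names rather than the verbatim SPDK setup/driver.sh:

    # Condensed sketch of the driver-guess logic exercised above
    # (illustrative function and driver names, not the verbatim SPDK helper).
    guess_driver_sketch() {
        local groups unsafe driver

        shopt -s nullglob                 # empty iommu_groups dir -> empty array
        groups=(/sys/kernel/iommu_groups/*)
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            driver=vfio-pci
        # Fall back to uio_pci_generic only if modprobe can resolve it to a
        # module file on this kernel (the insmod lines seen in the trace).
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            driver=uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
        echo "$driver"
    }

The "Looking for driver=uio_pci_generic" line and the marker reads around it are the test re-running setup.sh config and confirming that the driver it guessed is the one the devices were actually bound to.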
00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:22.894 08:17:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:22.894 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:22.894 08:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:22.894 08:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.152 No valid GPT data, bailing 00:04:23.152 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.152 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:23.152 08:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:23.152 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.152 08:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.152 08:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.152 08:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:23.152 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.152 08:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.152 08:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:23.153 
08:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:23.153 No valid GPT data, bailing 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:23.153 08:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:23.153 08:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:23.153 08:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:23.153 No valid GPT data, bailing 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:23.153 08:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:23.153 08:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:23.153 08:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:23.153 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:23.153 08:17:15 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:23.153 No valid GPT data, bailing 00:04:23.153 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.410 08:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:23.410 08:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:23.410 08:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:23.410 08:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:23.410 08:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.410 08:17:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.410 08:17:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.410 08:17:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.410 08:17:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.410 ************************************ 00:04:23.410 START TEST nvme_mount 00:04:23.410 ************************************ 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.410 08:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.344 Creating new GPT entries in memory. 00:04:24.344 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.344 other utilities. 00:04:24.344 08:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.344 08:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.344 08:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.344 08:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.344 08:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:25.301 Creating new GPT entries in memory. 00:04:25.301 The operation has completed successfully. 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57054 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.301 08:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.559 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.559 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:25.559 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.559 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.559 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.559 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.817 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.817 08:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.076 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.076 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.076 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.076 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.076 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.335 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.594 08:17:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.594 08:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.159 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.159 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:27.159 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.160 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.160 ************************************ 00:04:27.160 END TEST nvme_mount 00:04:27.160 ************************************ 00:04:27.160 00:04:27.160 real 0m3.969s 00:04:27.160 user 0m0.684s 00:04:27.160 sys 0m1.029s 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.160 08:17:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:27.418 08:17:19 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:27.418 08:17:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:27.418 08:17:19 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.418 08:17:19 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.418 08:17:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.418 ************************************ 00:04:27.418 START TEST dm_mount 00:04:27.418 ************************************ 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
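For anyone skimming the trace rather than replaying it, the nvme_mount test that just finished condenses to roughly the following shell sequence (device name, mount point and sector bounds are copied from the log; the real logic lives in test/setup/devices.sh and test/setup/common.sh, so treat this as an illustrative summary, not the script itself):

  # wipe any old metadata, then create one partition of 262144 sectors
  sgdisk /dev/nvme0n1 --zap-all
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191

  # format it, mount it, and drop the probe file the verify step looks for
  mkfs.ext4 -qF /dev/nvme0n1p1
  mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  : > /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme

  # cleanup: unmount, then wipe the partition and the disk-level GPT/PMBR signatures
  umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  wipefs --all /dev/nvme0n1p1
  wipefs --all /dev/nvme0n1

As the trace shows, the same format/mount/verify cycle is then run a second time against the whole disk with a 1024M filesystem before the final cleanup, and the dm_mount setup below repeats the partitioning with two partitions instead of one.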
00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.418 08:17:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:28.364 Creating new GPT entries in memory. 00:04:28.364 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.364 other utilities. 00:04:28.364 08:17:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.364 08:17:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.364 08:17:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.364 08:17:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.364 08:17:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:29.298 Creating new GPT entries in memory. 00:04:29.298 The operation has completed successfully. 00:04:29.298 08:17:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.298 08:17:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.298 08:17:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.298 08:17:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.298 08:17:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:30.672 The operation has completed successfully. 
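The two sgdisk calls above carve the same disk into two equal slices for the device-mapper test; the entries that follow assemble them into /dev/mapper/nvme_dm_test and run the mkfs/mount/verify cycle on that node instead of on a raw partition. Sector bounds are taken verbatim from the trace, and the arithmetic is shown only to make the sizing obvious:

  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191     # nvme0n1p1
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335   # nvme0n1p2

  # both ranges hold 262144 sectors, i.e. the 1073741824-byte test size divided by 4096
  echo $(( 264191 - 2048 + 1 )) $(( 526335 - 264192 + 1 ))       # prints: 262144 262144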
00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57488 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.672 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:30.673 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:30.931 08:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.931 08:17:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.192 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.456 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.456 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.456 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:31.457 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:31.457 00:04:31.457 real 0m4.178s 00:04:31.457 user 0m0.461s 00:04:31.457 sys 0m0.676s 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.457 ************************************ 00:04:31.457 08:17:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:31.457 END TEST dm_mount 00:04:31.457 ************************************ 00:04:31.457 08:17:23 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:31.457 08:17:23 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:31.457 08:17:23 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:31.457 08:17:23 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.457 08:17:23 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.457 08:17:23 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.457 08:17:23 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.457 08:17:23 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.714 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:31.714 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:31.714 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.714 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.714 08:17:23 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:31.714 00:04:31.714 real 0m9.669s 00:04:31.714 user 0m1.813s 00:04:31.714 sys 0m2.277s 00:04:31.714 08:17:23 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.714 ************************************ 00:04:31.714 END TEST devices 00:04:31.714 08:17:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:31.714 ************************************ 00:04:31.972 08:17:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:31.972 ************************************ 00:04:31.972 END TEST setup.sh 00:04:31.972 ************************************ 00:04:31.972 00:04:31.972 real 0m21.561s 00:04:31.972 user 0m6.944s 00:04:31.972 sys 0m8.940s 00:04:31.972 08:17:23 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.972 08:17:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.972 08:17:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.972 08:17:23 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:32.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.538 Hugepages 00:04:32.538 node hugesize free / total 00:04:32.538 node0 1048576kB 0 / 0 00:04:32.538 node0 2048kB 2048 / 2048 00:04:32.538 00:04:32.538 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.538 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:32.796 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:32.796 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:32.796 08:17:24 -- spdk/autotest.sh@130 -- # uname -s 00:04:32.796 08:17:24 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:32.796 08:17:24 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:32.796 08:17:24 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.364 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.364 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.723 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.723 08:17:25 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:34.660 08:17:26 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:34.660 08:17:26 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:34.660 08:17:26 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.660 08:17:26 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:34.660 08:17:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:34.660 08:17:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:34.660 08:17:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.660 08:17:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:34.660 08:17:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:34.660 08:17:26 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:34.660 08:17:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:34.660 08:17:26 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.918 Waiting for block devices as requested 00:04:34.918 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:35.176 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:35.176 08:17:27 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:35.176 08:17:27 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:35.176 08:17:27 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:35.176 08:17:27 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:35.176 08:17:27 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:35.176 08:17:27 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1557 -- # continue 00:04:35.176 
08:17:27 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:35.176 08:17:27 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:35.176 08:17:27 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:35.176 08:17:27 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:35.176 08:17:27 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:35.176 08:17:27 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:35.176 08:17:27 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:35.176 08:17:27 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:35.176 08:17:27 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:35.176 08:17:27 -- common/autotest_common.sh@1557 -- # continue 00:04:35.176 08:17:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:35.176 08:17:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.176 08:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:35.176 08:17:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:35.176 08:17:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.176 08:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:35.176 08:17:27 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.111 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.111 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.111 08:17:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:36.111 08:17:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.111 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.111 08:17:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:36.111 08:17:28 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:36.111 08:17:28 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.111 08:17:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:36.111 08:17:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:36.111 08:17:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:36.111 08:17:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:36.111 08:17:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:36.111 08:17:28 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.111 08:17:28 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.111 08:17:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:36.111 08:17:28 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:36.111 08:17:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:36.111 08:17:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.111 08:17:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:36.111 08:17:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:36.111 08:17:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:36.111 08:17:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:36.111 08:17:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:36.111 08:17:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:36.111 08:17:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:36.111 08:17:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:36.111 08:17:28 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:36.111 08:17:28 -- common/autotest_common.sh@1593 -- # return 0 00:04:36.111 08:17:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:36.111 08:17:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:36.111 08:17:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.111 08:17:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:36.111 08:17:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:36.111 08:17:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.111 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.369 08:17:28 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:36.369 08:17:28 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:36.369 08:17:28 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:36.369 08:17:28 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:36.369 08:17:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.369 08:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.369 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.369 ************************************ 00:04:36.369 START TEST env 00:04:36.369 ************************************ 00:04:36.369 08:17:28 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:36.369 * Looking for test storage... 
00:04:36.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:36.369 08:17:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:36.369 08:17:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.369 08:17:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.369 08:17:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.369 ************************************ 00:04:36.369 START TEST env_memory 00:04:36.369 ************************************ 00:04:36.369 08:17:28 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:36.369 00:04:36.369 00:04:36.369 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.369 http://cunit.sourceforge.net/ 00:04:36.369 00:04:36.369 00:04:36.369 Suite: memory 00:04:36.370 Test: alloc and free memory map ...[2024-07-15 08:17:28.427026] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.370 passed 00:04:36.370 Test: mem map translation ...[2024-07-15 08:17:28.458584] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.370 [2024-07-15 08:17:28.458925] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.370 [2024-07-15 08:17:28.459279] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.370 [2024-07-15 08:17:28.459613] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.370 passed 00:04:36.370 Test: mem map registration ...[2024-07-15 08:17:28.524730] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:36.370 [2024-07-15 08:17:28.525025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:36.628 passed 00:04:36.628 Test: mem map adjacent registrations ...passed 00:04:36.628 00:04:36.628 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.628 suites 1 1 n/a 0 0 00:04:36.628 tests 4 4 4 0 0 00:04:36.628 asserts 152 152 152 0 n/a 00:04:36.628 00:04:36.628 Elapsed time = 0.219 seconds 00:04:36.628 ************************************ 00:04:36.628 END TEST env_memory 00:04:36.628 ************************************ 00:04:36.628 00:04:36.628 real 0m0.235s 00:04:36.628 user 0m0.220s 00:04:36.628 sys 0m0.010s 00:04:36.628 08:17:28 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.628 08:17:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.628 08:17:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.628 08:17:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.628 08:17:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.628 08:17:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.628 08:17:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.628 ************************************ 00:04:36.628 START TEST env_vtophys 
00:04:36.628 ************************************ 00:04:36.628 08:17:28 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.628 EAL: lib.eal log level changed from notice to debug 00:04:36.628 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 1 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 2 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 3 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 4 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 5 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 6 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 7 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 8 as core 0 on socket 0 00:04:36.628 EAL: Detected lcore 9 as core 0 on socket 0 00:04:36.628 EAL: Maximum logical cores by configuration: 128 00:04:36.628 EAL: Detected CPU lcores: 10 00:04:36.628 EAL: Detected NUMA nodes: 1 00:04:36.628 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.628 EAL: Detected shared linkage of DPDK 00:04:36.628 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.628 EAL: Selected IOVA mode 'PA' 00:04:36.628 EAL: Probing VFIO support... 00:04:36.628 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.628 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:36.628 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.628 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.628 EAL: Setting up physically contiguous memory... 00:04:36.628 EAL: Setting maximum number of open files to 524288 00:04:36.628 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.628 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.628 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.628 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.628 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.628 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.628 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.628 EAL: Hugepages will be freed exactly as allocated. 
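A quick sanity check on the EAL numbers above, using nothing but shell arithmetic: each of the four memseg lists holds 8192 slots of the 2 MiB (2097152-byte) hugepage size, which is exactly the 0x400000000-byte virtual area requested per list.

  printf '%#x\n' $(( 8192 * 2097152 ))                 # prints 0x400000000, i.e. 16 GiB per memseg list
  echo $(( 4 * 8192 * 2097152 / 1024 / 1024 / 1024 ))  # prints 64, i.e. 64 GiB of address space reserved in total

Only the virtual ranges are reserved up front; the setup.sh status earlier in the log showed 2048 backing hugepages of 2048 kB (4 GiB) actually allocated on node0.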
00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.628 EAL: No shared files mode enabled, IPC is disabled 00:04:36.886 EAL: TSC frequency is ~2200000 KHz 00:04:36.886 EAL: Main lcore 0 is ready (tid=7fea206a5a00;cpuset=[0]) 00:04:36.886 EAL: Trying to obtain current memory policy. 00:04:36.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.886 EAL: Restoring previous memory policy: 0 00:04:36.886 EAL: request: mp_malloc_sync 00:04:36.886 EAL: No shared files mode enabled, IPC is disabled 00:04:36.886 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.886 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.886 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.886 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.886 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:36.886 00:04:36.886 00:04:36.886 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.886 http://cunit.sourceforge.net/ 00:04:36.886 00:04:36.886 00:04:36.886 Suite: components_suite 00:04:36.886 Test: vtophys_malloc_test ...passed 00:04:36.886 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.886 EAL: Restoring previous memory policy: 4 00:04:36.886 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.886 EAL: request: mp_malloc_sync 00:04:36.886 EAL: No shared files mode enabled, IPC is disabled 00:04:36.886 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.886 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.886 EAL: request: mp_malloc_sync 00:04:36.886 EAL: No shared files mode enabled, IPC is disabled 00:04:36.886 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.886 EAL: Trying to obtain current memory policy. 00:04:36.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.886 EAL: Restoring previous memory policy: 4 00:04:36.886 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.886 EAL: request: mp_malloc_sync 00:04:36.886 EAL: No shared files mode enabled, IPC is disabled 00:04:36.886 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.886 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.886 EAL: request: mp_malloc_sync 00:04:36.886 EAL: No shared files mode enabled, IPC is disabled 00:04:36.886 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.886 EAL: Trying to obtain current memory policy. 00:04:36.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.887 EAL: Restoring previous memory policy: 4 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.887 EAL: Trying to obtain current memory policy. 
00:04:36.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.887 EAL: Restoring previous memory policy: 4 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.887 EAL: Trying to obtain current memory policy. 00:04:36.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.887 EAL: Restoring previous memory policy: 4 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.887 EAL: Trying to obtain current memory policy. 00:04:36.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.887 EAL: Restoring previous memory policy: 4 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.887 EAL: Trying to obtain current memory policy. 00:04:36.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.887 EAL: Restoring previous memory policy: 4 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.887 EAL: Trying to obtain current memory policy. 00:04:36.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.887 EAL: Restoring previous memory policy: 4 00:04:36.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.887 EAL: request: mp_malloc_sync 00:04:36.887 EAL: No shared files mode enabled, IPC is disabled 00:04:36.887 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.144 EAL: request: mp_malloc_sync 00:04:37.144 EAL: No shared files mode enabled, IPC is disabled 00:04:37.144 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.144 EAL: Trying to obtain current memory policy. 
00:04:37.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.144 EAL: Restoring previous memory policy: 4 00:04:37.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.144 EAL: request: mp_malloc_sync 00:04:37.144 EAL: No shared files mode enabled, IPC is disabled 00:04:37.144 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.402 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.402 EAL: request: mp_malloc_sync 00:04:37.402 EAL: No shared files mode enabled, IPC is disabled 00:04:37.402 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.402 EAL: Trying to obtain current memory policy. 00:04:37.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.667 EAL: Restoring previous memory policy: 4 00:04:37.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.667 EAL: request: mp_malloc_sync 00:04:37.667 EAL: No shared files mode enabled, IPC is disabled 00:04:37.667 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.182 passed 00:04:38.182 00:04:38.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.182 suites 1 1 n/a 0 0 00:04:38.182 tests 2 2 2 0 0 00:04:38.182 asserts 5316 5316 5316 0 n/a 00:04:38.182 00:04:38.182 Elapsed time = 1.238 seconds 00:04:38.182 EAL: request: mp_malloc_sync 00:04:38.182 EAL: No shared files mode enabled, IPC is disabled 00:04:38.182 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:38.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.182 EAL: request: mp_malloc_sync 00:04:38.182 EAL: No shared files mode enabled, IPC is disabled 00:04:38.182 EAL: Heap on socket 0 was shrunk by 2MB 00:04:38.182 EAL: No shared files mode enabled, IPC is disabled 00:04:38.182 EAL: No shared files mode enabled, IPC is disabled 00:04:38.182 EAL: No shared files mode enabled, IPC is disabled 00:04:38.182 ************************************ 00:04:38.182 END TEST env_vtophys 00:04:38.182 ************************************ 00:04:38.182 00:04:38.182 real 0m1.440s 00:04:38.182 user 0m0.784s 00:04:38.182 sys 0m0.516s 00:04:38.182 08:17:30 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.182 08:17:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:38.182 08:17:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.182 08:17:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.182 08:17:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.182 08:17:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.182 08:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.182 ************************************ 00:04:38.182 START TEST env_pci 00:04:38.182 ************************************ 00:04:38.182 08:17:30 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.182 00:04:38.182 00:04:38.182 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.182 http://cunit.sourceforge.net/ 00:04:38.182 00:04:38.182 00:04:38.182 Suite: pci 00:04:38.182 Test: pci_hook ...[2024-07-15 08:17:30.174402] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58677 has claimed it 00:04:38.182 passed 00:04:38.182 00:04:38.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.182 suites 1 1 n/a 0 0 00:04:38.183 tests 1 1 1 0 0 00:04:38.183 asserts 25 25 25 0 n/a 00:04:38.183 
00:04:38.183 Elapsed time = 0.002 seconds 00:04:38.183 EAL: Cannot find device (10000:00:01.0) 00:04:38.183 EAL: Failed to attach device on primary process 00:04:38.183 00:04:38.183 real 0m0.021s 00:04:38.183 user 0m0.015s 00:04:38.183 sys 0m0.006s 00:04:38.183 ************************************ 00:04:38.183 END TEST env_pci 00:04:38.183 ************************************ 00:04:38.183 08:17:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.183 08:17:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:38.183 08:17:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.183 08:17:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.183 08:17:30 env -- env/env.sh@15 -- # uname 00:04:38.183 08:17:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.183 08:17:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.183 08:17:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.183 08:17:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:38.183 08:17:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.183 08:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.183 ************************************ 00:04:38.183 START TEST env_dpdk_post_init 00:04:38.183 ************************************ 00:04:38.183 08:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.183 EAL: Detected CPU lcores: 10 00:04:38.183 EAL: Detected NUMA nodes: 1 00:04:38.183 EAL: Detected shared linkage of DPDK 00:04:38.183 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.183 EAL: Selected IOVA mode 'PA' 00:04:38.440 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.440 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:38.440 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:38.440 Starting DPDK initialization... 00:04:38.440 Starting SPDK post initialization... 00:04:38.440 SPDK NVMe probe 00:04:38.440 Attaching to 0000:00:10.0 00:04:38.440 Attaching to 0000:00:11.0 00:04:38.440 Attached to 0000:00:10.0 00:04:38.440 Attached to 0000:00:11.0 00:04:38.440 Cleaning up... 
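The env_dpdk_post_init run above was launched by env.sh with a minimal EAL command line; reconstructed from the argv lines in the trace (binary path copied from the log, the rest is a condensation of what env.sh does), it amounts to:

  argv='-c 0x1'                                      # run on a single core
  [ "$(uname)" = Linux ] && argv+=' --base-virtaddr=0x200000000000'
  # leaving $argv unquoted is intentional: each token must reach the binary as a separate EAL option
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv

Pinning --base-virtaddr is typically done so DPDK's large virtual reservations land at a predictable address; with that in place the probe attaches to both emulated NVMe controllers (0000:00:10.0 and 0000:00:11.0) and then cleans up, as the trace shows.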
00:04:38.440 ************************************ 00:04:38.440 END TEST env_dpdk_post_init 00:04:38.440 ************************************ 00:04:38.440 00:04:38.440 real 0m0.184s 00:04:38.440 user 0m0.051s 00:04:38.440 sys 0m0.033s 00:04:38.440 08:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.440 08:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.440 08:17:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.440 08:17:30 env -- env/env.sh@26 -- # uname 00:04:38.440 08:17:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.440 08:17:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.440 08:17:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.440 08:17:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.440 08:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.440 ************************************ 00:04:38.440 START TEST env_mem_callbacks 00:04:38.440 ************************************ 00:04:38.440 08:17:30 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.440 EAL: Detected CPU lcores: 10 00:04:38.440 EAL: Detected NUMA nodes: 1 00:04:38.440 EAL: Detected shared linkage of DPDK 00:04:38.440 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.440 EAL: Selected IOVA mode 'PA' 00:04:38.440 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.440 00:04:38.440 00:04:38.440 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.440 http://cunit.sourceforge.net/ 00:04:38.440 00:04:38.440 00:04:38.440 Suite: memory 00:04:38.440 Test: test ... 
00:04:38.440 register 0x200000200000 2097152 00:04:38.440 malloc 3145728 00:04:38.440 register 0x200000400000 4194304 00:04:38.440 buf 0x200000500000 len 3145728 PASSED 00:04:38.440 malloc 64 00:04:38.440 buf 0x2000004fff40 len 64 PASSED 00:04:38.440 malloc 4194304 00:04:38.440 register 0x200000800000 6291456 00:04:38.440 buf 0x200000a00000 len 4194304 PASSED 00:04:38.440 free 0x200000500000 3145728 00:04:38.440 free 0x2000004fff40 64 00:04:38.440 unregister 0x200000400000 4194304 PASSED 00:04:38.440 free 0x200000a00000 4194304 00:04:38.440 unregister 0x200000800000 6291456 PASSED 00:04:38.440 malloc 8388608 00:04:38.698 register 0x200000400000 10485760 00:04:38.698 buf 0x200000600000 len 8388608 PASSED 00:04:38.698 free 0x200000600000 8388608 00:04:38.698 unregister 0x200000400000 10485760 PASSED 00:04:38.698 passed 00:04:38.698 00:04:38.698 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.698 suites 1 1 n/a 0 0 00:04:38.698 tests 1 1 1 0 0 00:04:38.698 asserts 15 15 15 0 n/a 00:04:38.698 00:04:38.698 Elapsed time = 0.009 seconds 00:04:38.698 ************************************ 00:04:38.698 END TEST env_mem_callbacks 00:04:38.698 ************************************ 00:04:38.698 00:04:38.698 real 0m0.141s 00:04:38.698 user 0m0.015s 00:04:38.698 sys 0m0.025s 00:04:38.698 08:17:30 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.698 08:17:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.698 08:17:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.698 ************************************ 00:04:38.698 END TEST env 00:04:38.698 ************************************ 00:04:38.698 00:04:38.698 real 0m2.358s 00:04:38.698 user 0m1.204s 00:04:38.698 sys 0m0.799s 00:04:38.698 08:17:30 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.698 08:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.698 08:17:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.698 08:17:30 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.698 08:17:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.698 08:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.698 08:17:30 -- common/autotest_common.sh@10 -- # set +x 00:04:38.698 ************************************ 00:04:38.698 START TEST rpc 00:04:38.698 ************************************ 00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.698 * Looking for test storage... 00:04:38.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.698 08:17:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58792 00:04:38.698 08:17:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:38.698 08:17:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.698 08:17:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58792 00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@829 -- # '[' -z 58792 ']' 00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
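At this point the env suite is finished and the rpc suite starts spdk_tgt with '-e bdev' (enabling the bdev tracepoint group) and waits for the default RPC socket. A rough stand-in for that startup, without the harness's waitforlisten/killprocess helpers (the polling loop below is only a sketch of what waitforlisten does; the paths are the ones shown in the log):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # crude replacement for waitforlisten: poll until the UNIX-domain socket appears
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version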
00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.698 08:17:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.698 [2024-07-15 08:17:30.839246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:38.698 [2024-07-15 08:17:30.839573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58792 ] 00:04:38.975 [2024-07-15 08:17:30.980520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.975 [2024-07-15 08:17:31.109483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.975 [2024-07-15 08:17:31.109818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58792' to capture a snapshot of events at runtime. 00:04:38.975 [2024-07-15 08:17:31.109986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.975 [2024-07-15 08:17:31.110126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.975 [2024-07-15 08:17:31.110175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58792 for offline analysis/debug. 00:04:38.975 [2024-07-15 08:17:31.110214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.233 [2024-07-15 08:17:31.166733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.801 08:17:31 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.801 08:17:31 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:39.801 08:17:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.801 08:17:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.801 08:17:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.801 08:17:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.801 08:17:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.801 08:17:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.801 08:17:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.801 ************************************ 00:04:39.801 START TEST rpc_integrity 00:04:39.801 ************************************ 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.801 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.801 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.801 { 00:04:39.801 "name": "Malloc0", 00:04:39.801 "aliases": [ 00:04:39.801 "941afcb8-8440-4e98-a5c6-2040b18b4852" 00:04:39.801 ], 00:04:39.801 "product_name": "Malloc disk", 00:04:39.801 "block_size": 512, 00:04:39.801 "num_blocks": 16384, 00:04:39.801 "uuid": "941afcb8-8440-4e98-a5c6-2040b18b4852", 00:04:39.801 "assigned_rate_limits": { 00:04:39.801 "rw_ios_per_sec": 0, 00:04:39.801 "rw_mbytes_per_sec": 0, 00:04:39.801 "r_mbytes_per_sec": 0, 00:04:39.801 "w_mbytes_per_sec": 0 00:04:39.801 }, 00:04:39.801 "claimed": false, 00:04:39.801 "zoned": false, 00:04:39.801 "supported_io_types": { 00:04:39.801 "read": true, 00:04:39.801 "write": true, 00:04:39.801 "unmap": true, 00:04:39.801 "flush": true, 00:04:39.802 "reset": true, 00:04:39.802 "nvme_admin": false, 00:04:39.802 "nvme_io": false, 00:04:39.802 "nvme_io_md": false, 00:04:39.802 "write_zeroes": true, 00:04:39.802 "zcopy": true, 00:04:39.802 "get_zone_info": false, 00:04:39.802 "zone_management": false, 00:04:39.802 "zone_append": false, 00:04:39.802 "compare": false, 00:04:39.802 "compare_and_write": false, 00:04:39.802 "abort": true, 00:04:39.802 "seek_hole": false, 00:04:39.802 "seek_data": false, 00:04:39.802 "copy": true, 00:04:39.802 "nvme_iov_md": false 00:04:39.802 }, 00:04:39.802 "memory_domains": [ 00:04:39.802 { 00:04:39.802 "dma_device_id": "system", 00:04:39.802 "dma_device_type": 1 00:04:39.802 }, 00:04:39.802 { 00:04:39.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.802 "dma_device_type": 2 00:04:39.802 } 00:04:39.802 ], 00:04:39.802 "driver_specific": {} 00:04:39.802 } 00:04:39.802 ]' 00:04:39.802 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.060 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.060 08:17:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.060 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.060 08:17:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.060 [2024-07-15 08:17:32.004789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.060 [2024-07-15 08:17:32.004855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.060 [2024-07-15 08:17:32.004878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa66da0 00:04:40.060 [2024-07-15 08:17:32.004888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.060 [2024-07-15 08:17:32.006693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.060 [2024-07-15 08:17:32.006743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:40.060 Passthru0 00:04:40.060 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.060 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.060 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.060 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.060 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.060 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.060 { 00:04:40.060 "name": "Malloc0", 00:04:40.060 "aliases": [ 00:04:40.060 "941afcb8-8440-4e98-a5c6-2040b18b4852" 00:04:40.060 ], 00:04:40.060 "product_name": "Malloc disk", 00:04:40.060 "block_size": 512, 00:04:40.060 "num_blocks": 16384, 00:04:40.060 "uuid": "941afcb8-8440-4e98-a5c6-2040b18b4852", 00:04:40.060 "assigned_rate_limits": { 00:04:40.060 "rw_ios_per_sec": 0, 00:04:40.060 "rw_mbytes_per_sec": 0, 00:04:40.060 "r_mbytes_per_sec": 0, 00:04:40.060 "w_mbytes_per_sec": 0 00:04:40.060 }, 00:04:40.060 "claimed": true, 00:04:40.060 "claim_type": "exclusive_write", 00:04:40.060 "zoned": false, 00:04:40.060 "supported_io_types": { 00:04:40.060 "read": true, 00:04:40.060 "write": true, 00:04:40.060 "unmap": true, 00:04:40.060 "flush": true, 00:04:40.060 "reset": true, 00:04:40.060 "nvme_admin": false, 00:04:40.060 "nvme_io": false, 00:04:40.060 "nvme_io_md": false, 00:04:40.060 "write_zeroes": true, 00:04:40.060 "zcopy": true, 00:04:40.060 "get_zone_info": false, 00:04:40.060 "zone_management": false, 00:04:40.060 "zone_append": false, 00:04:40.060 "compare": false, 00:04:40.061 "compare_and_write": false, 00:04:40.061 "abort": true, 00:04:40.061 "seek_hole": false, 00:04:40.061 "seek_data": false, 00:04:40.061 "copy": true, 00:04:40.061 "nvme_iov_md": false 00:04:40.061 }, 00:04:40.061 "memory_domains": [ 00:04:40.061 { 00:04:40.061 "dma_device_id": "system", 00:04:40.061 "dma_device_type": 1 00:04:40.061 }, 00:04:40.061 { 00:04:40.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.061 "dma_device_type": 2 00:04:40.061 } 00:04:40.061 ], 00:04:40.061 "driver_specific": {} 00:04:40.061 }, 00:04:40.061 { 00:04:40.061 "name": "Passthru0", 00:04:40.061 "aliases": [ 00:04:40.061 "3dfe1208-86fe-5a67-baa0-4f016506d414" 00:04:40.061 ], 00:04:40.061 "product_name": "passthru", 00:04:40.061 "block_size": 512, 00:04:40.061 "num_blocks": 16384, 00:04:40.061 "uuid": "3dfe1208-86fe-5a67-baa0-4f016506d414", 00:04:40.061 "assigned_rate_limits": { 00:04:40.061 "rw_ios_per_sec": 0, 00:04:40.061 "rw_mbytes_per_sec": 0, 00:04:40.061 "r_mbytes_per_sec": 0, 00:04:40.061 "w_mbytes_per_sec": 0 00:04:40.061 }, 00:04:40.061 "claimed": false, 00:04:40.061 "zoned": false, 00:04:40.061 "supported_io_types": { 00:04:40.061 "read": true, 00:04:40.061 "write": true, 00:04:40.061 "unmap": true, 00:04:40.061 "flush": true, 00:04:40.061 "reset": true, 00:04:40.061 "nvme_admin": false, 00:04:40.061 "nvme_io": false, 00:04:40.061 "nvme_io_md": false, 00:04:40.061 "write_zeroes": true, 00:04:40.061 "zcopy": true, 00:04:40.061 "get_zone_info": false, 00:04:40.061 "zone_management": false, 00:04:40.061 "zone_append": false, 00:04:40.061 "compare": false, 00:04:40.061 "compare_and_write": false, 00:04:40.061 "abort": true, 00:04:40.061 "seek_hole": false, 00:04:40.061 "seek_data": false, 00:04:40.061 "copy": true, 00:04:40.061 "nvme_iov_md": false 00:04:40.061 }, 00:04:40.061 "memory_domains": [ 00:04:40.061 { 00:04:40.061 "dma_device_id": "system", 00:04:40.061 
"dma_device_type": 1 00:04:40.061 }, 00:04:40.061 { 00:04:40.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.061 "dma_device_type": 2 00:04:40.061 } 00:04:40.061 ], 00:04:40.061 "driver_specific": { 00:04:40.061 "passthru": { 00:04:40.061 "name": "Passthru0", 00:04:40.061 "base_bdev_name": "Malloc0" 00:04:40.061 } 00:04:40.061 } 00:04:40.061 } 00:04:40.061 ]' 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.061 ************************************ 00:04:40.061 END TEST rpc_integrity 00:04:40.061 ************************************ 00:04:40.061 08:17:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.061 00:04:40.061 real 0m0.333s 00:04:40.061 user 0m0.224s 00:04:40.061 sys 0m0.044s 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.061 08:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.061 08:17:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.061 08:17:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.061 08:17:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.061 08:17:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.061 08:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.061 ************************************ 00:04:40.061 START TEST rpc_plugins 00:04:40.061 ************************************ 00:04:40.061 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:40.061 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.061 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.061 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 
08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.320 { 00:04:40.320 "name": "Malloc1", 00:04:40.320 "aliases": [ 00:04:40.320 "5fd38172-178d-4af7-8f3e-e6dc2d7992b2" 00:04:40.320 ], 00:04:40.320 "product_name": "Malloc disk", 00:04:40.320 "block_size": 4096, 00:04:40.320 "num_blocks": 256, 00:04:40.320 "uuid": "5fd38172-178d-4af7-8f3e-e6dc2d7992b2", 00:04:40.320 "assigned_rate_limits": { 00:04:40.320 "rw_ios_per_sec": 0, 00:04:40.320 "rw_mbytes_per_sec": 0, 00:04:40.320 "r_mbytes_per_sec": 0, 00:04:40.320 "w_mbytes_per_sec": 0 00:04:40.320 }, 00:04:40.320 "claimed": false, 00:04:40.320 "zoned": false, 00:04:40.320 "supported_io_types": { 00:04:40.320 "read": true, 00:04:40.320 "write": true, 00:04:40.320 "unmap": true, 00:04:40.320 "flush": true, 00:04:40.320 "reset": true, 00:04:40.320 "nvme_admin": false, 00:04:40.320 "nvme_io": false, 00:04:40.320 "nvme_io_md": false, 00:04:40.320 "write_zeroes": true, 00:04:40.320 "zcopy": true, 00:04:40.320 "get_zone_info": false, 00:04:40.320 "zone_management": false, 00:04:40.320 "zone_append": false, 00:04:40.320 "compare": false, 00:04:40.320 "compare_and_write": false, 00:04:40.320 "abort": true, 00:04:40.320 "seek_hole": false, 00:04:40.320 "seek_data": false, 00:04:40.320 "copy": true, 00:04:40.320 "nvme_iov_md": false 00:04:40.320 }, 00:04:40.320 "memory_domains": [ 00:04:40.320 { 00:04:40.320 "dma_device_id": "system", 00:04:40.320 "dma_device_type": 1 00:04:40.320 }, 00:04:40.320 { 00:04:40.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.320 "dma_device_type": 2 00:04:40.320 } 00:04:40.320 ], 00:04:40.320 "driver_specific": {} 00:04:40.320 } 00:04:40.320 ]' 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.320 ************************************ 00:04:40.320 END TEST rpc_plugins 00:04:40.320 ************************************ 00:04:40.320 08:17:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.320 00:04:40.320 real 0m0.166s 00:04:40.320 user 0m0.108s 00:04:40.320 sys 0m0.020s 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.320 08:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 08:17:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.320 08:17:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.320 08:17:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.320 08:17:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
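The rpc_plugins calls above go through rpc.py's plugin mechanism: the PYTHONPATH export made earlier in this run puts test/rpc_plugins on the import path, and the plugin module then supplies the create_malloc and delete_malloc commands. Outside the harness the same calls would look roughly like this (repo layout and a running target as in this log; the module name rpc_plugin is taken from the commands recorded above):

  export PYTHONPATH="$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins"
  # create_malloc prints the name of the new bdev (Malloc1 in the run above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin create_malloc
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1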
00:04:40.320 08:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 ************************************ 00:04:40.320 START TEST rpc_trace_cmd_test 00:04:40.320 ************************************ 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.320 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.320 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58792", 00:04:40.320 "tpoint_group_mask": "0x8", 00:04:40.320 "iscsi_conn": { 00:04:40.320 "mask": "0x2", 00:04:40.320 "tpoint_mask": "0x0" 00:04:40.320 }, 00:04:40.320 "scsi": { 00:04:40.321 "mask": "0x4", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "bdev": { 00:04:40.321 "mask": "0x8", 00:04:40.321 "tpoint_mask": "0xffffffffffffffff" 00:04:40.321 }, 00:04:40.321 "nvmf_rdma": { 00:04:40.321 "mask": "0x10", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "nvmf_tcp": { 00:04:40.321 "mask": "0x20", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "ftl": { 00:04:40.321 "mask": "0x40", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "blobfs": { 00:04:40.321 "mask": "0x80", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "dsa": { 00:04:40.321 "mask": "0x200", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "thread": { 00:04:40.321 "mask": "0x400", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "nvme_pcie": { 00:04:40.321 "mask": "0x800", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "iaa": { 00:04:40.321 "mask": "0x1000", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "nvme_tcp": { 00:04:40.321 "mask": "0x2000", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "bdev_nvme": { 00:04:40.321 "mask": "0x4000", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 }, 00:04:40.321 "sock": { 00:04:40.321 "mask": "0x8000", 00:04:40.321 "tpoint_mask": "0x0" 00:04:40.321 } 00:04:40.321 }' 00:04:40.321 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.579 ************************************ 00:04:40.579 END TEST rpc_trace_cmd_test 00:04:40.579 ************************************ 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.579 00:04:40.579 real 0m0.264s 00:04:40.579 user 0m0.229s 
00:04:40.579 sys 0m0.027s 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.579 08:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.579 08:17:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.579 08:17:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.579 08:17:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.579 08:17:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.579 08:17:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.579 08:17:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.579 08:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.579 ************************************ 00:04:40.579 START TEST rpc_daemon_integrity 00:04:40.579 ************************************ 00:04:40.579 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:40.579 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.579 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.579 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.839 { 00:04:40.839 "name": "Malloc2", 00:04:40.839 "aliases": [ 00:04:40.839 "f7c101df-f7e9-43b8-8f83-8c16db960d85" 00:04:40.839 ], 00:04:40.839 "product_name": "Malloc disk", 00:04:40.839 "block_size": 512, 00:04:40.839 "num_blocks": 16384, 00:04:40.839 "uuid": "f7c101df-f7e9-43b8-8f83-8c16db960d85", 00:04:40.839 "assigned_rate_limits": { 00:04:40.839 "rw_ios_per_sec": 0, 00:04:40.839 "rw_mbytes_per_sec": 0, 00:04:40.839 "r_mbytes_per_sec": 0, 00:04:40.839 "w_mbytes_per_sec": 0 00:04:40.839 }, 00:04:40.839 "claimed": false, 00:04:40.839 "zoned": false, 00:04:40.839 "supported_io_types": { 00:04:40.839 "read": true, 00:04:40.839 "write": true, 00:04:40.839 "unmap": true, 00:04:40.839 "flush": true, 00:04:40.839 "reset": true, 00:04:40.839 "nvme_admin": false, 00:04:40.839 "nvme_io": false, 00:04:40.839 "nvme_io_md": false, 00:04:40.839 "write_zeroes": true, 00:04:40.839 "zcopy": true, 00:04:40.839 "get_zone_info": false, 00:04:40.839 "zone_management": false, 00:04:40.839 "zone_append": false, 
00:04:40.839 "compare": false, 00:04:40.839 "compare_and_write": false, 00:04:40.839 "abort": true, 00:04:40.839 "seek_hole": false, 00:04:40.839 "seek_data": false, 00:04:40.839 "copy": true, 00:04:40.839 "nvme_iov_md": false 00:04:40.839 }, 00:04:40.839 "memory_domains": [ 00:04:40.839 { 00:04:40.839 "dma_device_id": "system", 00:04:40.839 "dma_device_type": 1 00:04:40.839 }, 00:04:40.839 { 00:04:40.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.839 "dma_device_type": 2 00:04:40.839 } 00:04:40.839 ], 00:04:40.839 "driver_specific": {} 00:04:40.839 } 00:04:40.839 ]' 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.839 [2024-07-15 08:17:32.893340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.839 [2024-07-15 08:17:32.893395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.839 [2024-07-15 08:17:32.893418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xacbbe0 00:04:40.839 [2024-07-15 08:17:32.893427] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.839 [2024-07-15 08:17:32.895107] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.839 [2024-07-15 08:17:32.895150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.839 Passthru0 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.839 { 00:04:40.839 "name": "Malloc2", 00:04:40.839 "aliases": [ 00:04:40.839 "f7c101df-f7e9-43b8-8f83-8c16db960d85" 00:04:40.839 ], 00:04:40.839 "product_name": "Malloc disk", 00:04:40.839 "block_size": 512, 00:04:40.839 "num_blocks": 16384, 00:04:40.839 "uuid": "f7c101df-f7e9-43b8-8f83-8c16db960d85", 00:04:40.839 "assigned_rate_limits": { 00:04:40.839 "rw_ios_per_sec": 0, 00:04:40.839 "rw_mbytes_per_sec": 0, 00:04:40.839 "r_mbytes_per_sec": 0, 00:04:40.839 "w_mbytes_per_sec": 0 00:04:40.839 }, 00:04:40.839 "claimed": true, 00:04:40.839 "claim_type": "exclusive_write", 00:04:40.839 "zoned": false, 00:04:40.839 "supported_io_types": { 00:04:40.839 "read": true, 00:04:40.839 "write": true, 00:04:40.839 "unmap": true, 00:04:40.839 "flush": true, 00:04:40.839 "reset": true, 00:04:40.839 "nvme_admin": false, 00:04:40.839 "nvme_io": false, 00:04:40.839 "nvme_io_md": false, 00:04:40.839 "write_zeroes": true, 00:04:40.839 "zcopy": true, 00:04:40.839 "get_zone_info": false, 00:04:40.839 "zone_management": false, 00:04:40.839 "zone_append": false, 00:04:40.839 "compare": false, 00:04:40.839 "compare_and_write": false, 00:04:40.839 "abort": true, 00:04:40.839 "seek_hole": 
false, 00:04:40.839 "seek_data": false, 00:04:40.839 "copy": true, 00:04:40.839 "nvme_iov_md": false 00:04:40.839 }, 00:04:40.839 "memory_domains": [ 00:04:40.839 { 00:04:40.839 "dma_device_id": "system", 00:04:40.839 "dma_device_type": 1 00:04:40.839 }, 00:04:40.839 { 00:04:40.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.839 "dma_device_type": 2 00:04:40.839 } 00:04:40.839 ], 00:04:40.839 "driver_specific": {} 00:04:40.839 }, 00:04:40.839 { 00:04:40.839 "name": "Passthru0", 00:04:40.839 "aliases": [ 00:04:40.839 "ccb28655-b915-5c7d-af69-afd21b27716e" 00:04:40.839 ], 00:04:40.839 "product_name": "passthru", 00:04:40.839 "block_size": 512, 00:04:40.839 "num_blocks": 16384, 00:04:40.839 "uuid": "ccb28655-b915-5c7d-af69-afd21b27716e", 00:04:40.839 "assigned_rate_limits": { 00:04:40.839 "rw_ios_per_sec": 0, 00:04:40.839 "rw_mbytes_per_sec": 0, 00:04:40.839 "r_mbytes_per_sec": 0, 00:04:40.839 "w_mbytes_per_sec": 0 00:04:40.839 }, 00:04:40.839 "claimed": false, 00:04:40.839 "zoned": false, 00:04:40.839 "supported_io_types": { 00:04:40.839 "read": true, 00:04:40.839 "write": true, 00:04:40.839 "unmap": true, 00:04:40.839 "flush": true, 00:04:40.839 "reset": true, 00:04:40.839 "nvme_admin": false, 00:04:40.839 "nvme_io": false, 00:04:40.839 "nvme_io_md": false, 00:04:40.839 "write_zeroes": true, 00:04:40.839 "zcopy": true, 00:04:40.839 "get_zone_info": false, 00:04:40.839 "zone_management": false, 00:04:40.839 "zone_append": false, 00:04:40.839 "compare": false, 00:04:40.839 "compare_and_write": false, 00:04:40.839 "abort": true, 00:04:40.839 "seek_hole": false, 00:04:40.839 "seek_data": false, 00:04:40.839 "copy": true, 00:04:40.839 "nvme_iov_md": false 00:04:40.839 }, 00:04:40.839 "memory_domains": [ 00:04:40.839 { 00:04:40.839 "dma_device_id": "system", 00:04:40.839 "dma_device_type": 1 00:04:40.839 }, 00:04:40.839 { 00:04:40.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.839 "dma_device_type": 2 00:04:40.839 } 00:04:40.839 ], 00:04:40.839 "driver_specific": { 00:04:40.839 "passthru": { 00:04:40.839 "name": "Passthru0", 00:04:40.839 "base_bdev_name": "Malloc2" 00:04:40.839 } 00:04:40.839 } 00:04:40.839 } 00:04:40.839 ]' 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.839 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.840 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.840 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.840 08:17:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.840 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.840 08:17:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.840 08:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.840 08:17:33 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.840 08:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.098 ************************************ 00:04:41.098 END TEST rpc_daemon_integrity 00:04:41.098 ************************************ 00:04:41.098 08:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.098 00:04:41.098 real 0m0.307s 00:04:41.098 user 0m0.200s 00:04:41.098 sys 0m0.041s 00:04:41.098 08:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.098 08:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.098 08:17:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.098 08:17:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.099 08:17:33 rpc -- rpc/rpc.sh@84 -- # killprocess 58792 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@948 -- # '[' -z 58792 ']' 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@952 -- # kill -0 58792 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@953 -- # uname 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58792 00:04:41.099 killing process with pid 58792 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58792' 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@967 -- # kill 58792 00:04:41.099 08:17:33 rpc -- common/autotest_common.sh@972 -- # wait 58792 00:04:41.357 00:04:41.357 real 0m2.817s 00:04:41.357 user 0m3.647s 00:04:41.357 sys 0m0.677s 00:04:41.357 08:17:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.357 ************************************ 00:04:41.357 END TEST rpc 00:04:41.357 ************************************ 00:04:41.357 08:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.616 08:17:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.616 08:17:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:41.616 08:17:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.616 08:17:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.616 08:17:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.616 ************************************ 00:04:41.616 START TEST skip_rpc 00:04:41.616 ************************************ 00:04:41.616 08:17:33 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:41.616 * Looking for test storage... 
00:04:41.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.616 08:17:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.616 08:17:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.616 08:17:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.616 08:17:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.616 08:17:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.616 08:17:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.616 ************************************ 00:04:41.616 START TEST skip_rpc 00:04:41.616 ************************************ 00:04:41.616 08:17:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:41.616 08:17:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58979 00:04:41.616 08:17:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.616 08:17:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.616 08:17:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.616 [2024-07-15 08:17:33.717203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:41.616 [2024-07-15 08:17:33.717298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:04:41.875 [2024-07-15 08:17:33.855033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.875 [2024-07-15 08:17:33.994576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.133 [2024-07-15 08:17:34.053933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58979 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58979 ']' 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58979 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58979 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58979' 00:04:47.397 killing process with pid 58979 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58979 00:04:47.397 08:17:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58979 00:04:47.397 00:04:47.397 ************************************ 00:04:47.397 END TEST skip_rpc 00:04:47.397 ************************************ 00:04:47.397 real 0m5.420s 00:04:47.397 user 0m5.009s 00:04:47.397 sys 0m0.303s 00:04:47.397 08:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.397 08:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.397 08:17:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:47.397 08:17:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.397 08:17:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.397 08:17:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.397 08:17:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.397 ************************************ 00:04:47.397 START TEST skip_rpc_with_json 00:04:47.397 ************************************ 00:04:47.397 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:47.397 08:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.397 08:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59071 00:04:47.397 08:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59071 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59071 ']' 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
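The skip_rpc case that finishes above boils down to: start the target with --no-rpc-server, confirm that an RPC call fails, then kill the target. Stripped of the rpc_cmd/NOT/killprocess helpers, the shape of the check is roughly as follows (binary path, core mask and the 5-second sleep are taken from the log; the real test's error handling is more thorough):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
      echo "unexpected: spdk_get_version succeeded with --no-rpc-server" >&2
      exit 1
  fi
  kill "$spdk_pid" && wait "$spdk_pid"

The skip_rpc_with_json case whose startup begins here does the opposite: it runs the target with the RPC server enabled, builds a configuration over RPC, saves it with save_config, and later restarts the target from that JSON file.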
00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.398 08:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.398 [2024-07-15 08:17:39.175606] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:47.398 [2024-07-15 08:17:39.176521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:04:47.398 [2024-07-15 08:17:39.314522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.398 [2024-07-15 08:17:39.430182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.398 [2024-07-15 08:17:39.483013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.333 [2024-07-15 08:17:40.143172] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.333 request: 00:04:48.333 { 00:04:48.333 "trtype": "tcp", 00:04:48.333 "method": "nvmf_get_transports", 00:04:48.333 "req_id": 1 00:04:48.333 } 00:04:48.333 Got JSON-RPC error response 00:04:48.333 response: 00:04:48.333 { 00:04:48.333 "code": -19, 00:04:48.333 "message": "No such device" 00:04:48.333 } 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.333 [2024-07-15 08:17:40.155323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.333 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.333 { 00:04:48.333 "subsystems": [ 00:04:48.333 { 00:04:48.333 "subsystem": "keyring", 00:04:48.334 "config": [] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "iobuf", 00:04:48.334 "config": [ 00:04:48.334 { 00:04:48.334 "method": "iobuf_set_options", 00:04:48.334 "params": { 00:04:48.334 "small_pool_count": 8192, 00:04:48.334 "large_pool_count": 1024, 00:04:48.334 "small_bufsize": 8192, 00:04:48.334 "large_bufsize": 135168 00:04:48.334 } 00:04:48.334 } 00:04:48.334 
] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "sock", 00:04:48.334 "config": [ 00:04:48.334 { 00:04:48.334 "method": "sock_set_default_impl", 00:04:48.334 "params": { 00:04:48.334 "impl_name": "uring" 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "sock_impl_set_options", 00:04:48.334 "params": { 00:04:48.334 "impl_name": "ssl", 00:04:48.334 "recv_buf_size": 4096, 00:04:48.334 "send_buf_size": 4096, 00:04:48.334 "enable_recv_pipe": true, 00:04:48.334 "enable_quickack": false, 00:04:48.334 "enable_placement_id": 0, 00:04:48.334 "enable_zerocopy_send_server": true, 00:04:48.334 "enable_zerocopy_send_client": false, 00:04:48.334 "zerocopy_threshold": 0, 00:04:48.334 "tls_version": 0, 00:04:48.334 "enable_ktls": false 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "sock_impl_set_options", 00:04:48.334 "params": { 00:04:48.334 "impl_name": "posix", 00:04:48.334 "recv_buf_size": 2097152, 00:04:48.334 "send_buf_size": 2097152, 00:04:48.334 "enable_recv_pipe": true, 00:04:48.334 "enable_quickack": false, 00:04:48.334 "enable_placement_id": 0, 00:04:48.334 "enable_zerocopy_send_server": true, 00:04:48.334 "enable_zerocopy_send_client": false, 00:04:48.334 "zerocopy_threshold": 0, 00:04:48.334 "tls_version": 0, 00:04:48.334 "enable_ktls": false 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "sock_impl_set_options", 00:04:48.334 "params": { 00:04:48.334 "impl_name": "uring", 00:04:48.334 "recv_buf_size": 2097152, 00:04:48.334 "send_buf_size": 2097152, 00:04:48.334 "enable_recv_pipe": true, 00:04:48.334 "enable_quickack": false, 00:04:48.334 "enable_placement_id": 0, 00:04:48.334 "enable_zerocopy_send_server": false, 00:04:48.334 "enable_zerocopy_send_client": false, 00:04:48.334 "zerocopy_threshold": 0, 00:04:48.334 "tls_version": 0, 00:04:48.334 "enable_ktls": false 00:04:48.334 } 00:04:48.334 } 00:04:48.334 ] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "vmd", 00:04:48.334 "config": [] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "accel", 00:04:48.334 "config": [ 00:04:48.334 { 00:04:48.334 "method": "accel_set_options", 00:04:48.334 "params": { 00:04:48.334 "small_cache_size": 128, 00:04:48.334 "large_cache_size": 16, 00:04:48.334 "task_count": 2048, 00:04:48.334 "sequence_count": 2048, 00:04:48.334 "buf_count": 2048 00:04:48.334 } 00:04:48.334 } 00:04:48.334 ] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "bdev", 00:04:48.334 "config": [ 00:04:48.334 { 00:04:48.334 "method": "bdev_set_options", 00:04:48.334 "params": { 00:04:48.334 "bdev_io_pool_size": 65535, 00:04:48.334 "bdev_io_cache_size": 256, 00:04:48.334 "bdev_auto_examine": true, 00:04:48.334 "iobuf_small_cache_size": 128, 00:04:48.334 "iobuf_large_cache_size": 16 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "bdev_raid_set_options", 00:04:48.334 "params": { 00:04:48.334 "process_window_size_kb": 1024 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "bdev_iscsi_set_options", 00:04:48.334 "params": { 00:04:48.334 "timeout_sec": 30 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "bdev_nvme_set_options", 00:04:48.334 "params": { 00:04:48.334 "action_on_timeout": "none", 00:04:48.334 "timeout_us": 0, 00:04:48.334 "timeout_admin_us": 0, 00:04:48.334 "keep_alive_timeout_ms": 10000, 00:04:48.334 "arbitration_burst": 0, 00:04:48.334 "low_priority_weight": 0, 00:04:48.334 "medium_priority_weight": 0, 00:04:48.334 "high_priority_weight": 0, 00:04:48.334 
"nvme_adminq_poll_period_us": 10000, 00:04:48.334 "nvme_ioq_poll_period_us": 0, 00:04:48.334 "io_queue_requests": 0, 00:04:48.334 "delay_cmd_submit": true, 00:04:48.334 "transport_retry_count": 4, 00:04:48.334 "bdev_retry_count": 3, 00:04:48.334 "transport_ack_timeout": 0, 00:04:48.334 "ctrlr_loss_timeout_sec": 0, 00:04:48.334 "reconnect_delay_sec": 0, 00:04:48.334 "fast_io_fail_timeout_sec": 0, 00:04:48.334 "disable_auto_failback": false, 00:04:48.334 "generate_uuids": false, 00:04:48.334 "transport_tos": 0, 00:04:48.334 "nvme_error_stat": false, 00:04:48.334 "rdma_srq_size": 0, 00:04:48.334 "io_path_stat": false, 00:04:48.334 "allow_accel_sequence": false, 00:04:48.334 "rdma_max_cq_size": 0, 00:04:48.334 "rdma_cm_event_timeout_ms": 0, 00:04:48.334 "dhchap_digests": [ 00:04:48.334 "sha256", 00:04:48.334 "sha384", 00:04:48.334 "sha512" 00:04:48.334 ], 00:04:48.334 "dhchap_dhgroups": [ 00:04:48.334 "null", 00:04:48.334 "ffdhe2048", 00:04:48.334 "ffdhe3072", 00:04:48.334 "ffdhe4096", 00:04:48.334 "ffdhe6144", 00:04:48.334 "ffdhe8192" 00:04:48.334 ] 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "bdev_nvme_set_hotplug", 00:04:48.334 "params": { 00:04:48.334 "period_us": 100000, 00:04:48.334 "enable": false 00:04:48.334 } 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "method": "bdev_wait_for_examine" 00:04:48.334 } 00:04:48.334 ] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "scsi", 00:04:48.334 "config": null 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "scheduler", 00:04:48.334 "config": [ 00:04:48.334 { 00:04:48.334 "method": "framework_set_scheduler", 00:04:48.334 "params": { 00:04:48.334 "name": "static" 00:04:48.334 } 00:04:48.334 } 00:04:48.334 ] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "vhost_scsi", 00:04:48.334 "config": [] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "vhost_blk", 00:04:48.334 "config": [] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "ublk", 00:04:48.334 "config": [] 00:04:48.334 }, 00:04:48.334 { 00:04:48.334 "subsystem": "nbd", 00:04:48.334 "config": [] 00:04:48.334 }, 00:04:48.335 { 00:04:48.335 "subsystem": "nvmf", 00:04:48.335 "config": [ 00:04:48.335 { 00:04:48.335 "method": "nvmf_set_config", 00:04:48.335 "params": { 00:04:48.335 "discovery_filter": "match_any", 00:04:48.335 "admin_cmd_passthru": { 00:04:48.335 "identify_ctrlr": false 00:04:48.335 } 00:04:48.335 } 00:04:48.335 }, 00:04:48.335 { 00:04:48.335 "method": "nvmf_set_max_subsystems", 00:04:48.335 "params": { 00:04:48.335 "max_subsystems": 1024 00:04:48.335 } 00:04:48.335 }, 00:04:48.335 { 00:04:48.335 "method": "nvmf_set_crdt", 00:04:48.335 "params": { 00:04:48.335 "crdt1": 0, 00:04:48.335 "crdt2": 0, 00:04:48.335 "crdt3": 0 00:04:48.335 } 00:04:48.335 }, 00:04:48.335 { 00:04:48.335 "method": "nvmf_create_transport", 00:04:48.335 "params": { 00:04:48.335 "trtype": "TCP", 00:04:48.335 "max_queue_depth": 128, 00:04:48.335 "max_io_qpairs_per_ctrlr": 127, 00:04:48.335 "in_capsule_data_size": 4096, 00:04:48.335 "max_io_size": 131072, 00:04:48.335 "io_unit_size": 131072, 00:04:48.335 "max_aq_depth": 128, 00:04:48.335 "num_shared_buffers": 511, 00:04:48.335 "buf_cache_size": 4294967295, 00:04:48.335 "dif_insert_or_strip": false, 00:04:48.335 "zcopy": false, 00:04:48.335 "c2h_success": true, 00:04:48.335 "sock_priority": 0, 00:04:48.335 "abort_timeout_sec": 1, 00:04:48.335 "ack_timeout": 0, 00:04:48.335 "data_wr_pool_size": 0 00:04:48.335 } 00:04:48.335 } 00:04:48.335 ] 00:04:48.335 }, 00:04:48.335 { 00:04:48.335 "subsystem": 
"iscsi", 00:04:48.335 "config": [ 00:04:48.335 { 00:04:48.335 "method": "iscsi_set_options", 00:04:48.335 "params": { 00:04:48.335 "node_base": "iqn.2016-06.io.spdk", 00:04:48.335 "max_sessions": 128, 00:04:48.335 "max_connections_per_session": 2, 00:04:48.335 "max_queue_depth": 64, 00:04:48.335 "default_time2wait": 2, 00:04:48.335 "default_time2retain": 20, 00:04:48.335 "first_burst_length": 8192, 00:04:48.335 "immediate_data": true, 00:04:48.335 "allow_duplicated_isid": false, 00:04:48.335 "error_recovery_level": 0, 00:04:48.335 "nop_timeout": 60, 00:04:48.335 "nop_in_interval": 30, 00:04:48.335 "disable_chap": false, 00:04:48.335 "require_chap": false, 00:04:48.335 "mutual_chap": false, 00:04:48.335 "chap_group": 0, 00:04:48.335 "max_large_datain_per_connection": 64, 00:04:48.335 "max_r2t_per_connection": 4, 00:04:48.335 "pdu_pool_size": 36864, 00:04:48.335 "immediate_data_pool_size": 16384, 00:04:48.335 "data_out_pool_size": 2048 00:04:48.335 } 00:04:48.335 } 00:04:48.335 ] 00:04:48.335 } 00:04:48.335 ] 00:04:48.335 } 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59071 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59071 ']' 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59071 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59071 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.335 killing process with pid 59071 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59071' 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59071 00:04:48.335 08:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59071 00:04:48.596 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59093 00:04:48.596 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.596 08:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59093 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59093 ']' 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59093 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59093 00:04:53.857 killing process with pid 59093 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.857 08:17:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59093' 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59093 00:04:53.857 08:17:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59093 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.116 00:04:54.116 real 0m7.068s 00:04:54.116 user 0m6.825s 00:04:54.116 sys 0m0.601s 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.116 ************************************ 00:04:54.116 END TEST skip_rpc_with_json 00:04:54.116 ************************************ 00:04:54.116 08:17:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.116 08:17:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.116 08:17:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.116 08:17:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.116 08:17:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.116 ************************************ 00:04:54.116 START TEST skip_rpc_with_delay 00:04:54.116 ************************************ 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.116 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.375 [2024-07-15 
08:17:46.299670] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:54.375 [2024-07-15 08:17:46.299852] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:54.375 ************************************ 00:04:54.375 END TEST skip_rpc_with_delay 00:04:54.375 ************************************ 00:04:54.375 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:54.375 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:54.375 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:54.375 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:54.375 00:04:54.375 real 0m0.089s 00:04:54.375 user 0m0.055s 00:04:54.375 sys 0m0.032s 00:04:54.375 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.375 08:17:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.375 08:17:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.375 08:17:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.375 08:17:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.375 08:17:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.375 08:17:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.375 08:17:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.375 08:17:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.375 ************************************ 00:04:54.375 START TEST exit_on_failed_rpc_init 00:04:54.375 ************************************ 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59208 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59208 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59208 ']' 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.375 08:17:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.375 [2024-07-15 08:17:46.443689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
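Editor's note on the skip_rpc_with_json round-trip traced above: the long JSON dump (sock, bdev, nvmf, iscsi and the other subsystems) is the shape of configuration that rpc.py save_config emits and that test/rpc/config.json holds, and the test then relaunches spdk_tgt from that file with no RPC server at all. A minimal sketch of the same round-trip, using only paths and flags visible in this trace (the save step itself is not shown in this excerpt, so treat that line as an assumption; redirecting the target's output to log.txt is likewise inferred from the later grep):

    SPDK=/home/vagrant/spdk_repo/spdk
    CONFIG=$SPDK/test/rpc/config.json
    LOG=$SPDK/test/rpc/log.txt

    # Capture the live configuration as JSON (assumed step; save_config
    # prints the subsystem config seen in the dump above to stdout).
    $SPDK/scripts/rpc.py save_config > "$CONFIG"

    # Relaunch with --no-rpc-server: the JSON file is now the only source
    # of configuration (mirrors rpc/skip_rpc.sh@46 in the trace).
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &

    # Give it time to come up, then verify the transport from the JSON
    # actually initialized (mirrors skip_rpc.sh@48 and @51).
    sleep 5
    grep -q 'TCP Transport Init' "$LOG"

The ERROR just above also explains skip_rpc_with_delay: combining --wait-for-rpc with --no-rpc-server is rejected, since there would be no RPC server to wait on, which is exactly the non-zero exit the test expects.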
00:04:54.375 [2024-07-15 08:17:46.443840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59208 ] 00:04:54.632 [2024-07-15 08:17:46.582801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.632 [2024-07-15 08:17:46.704254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.633 [2024-07-15 08:17:46.757763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.572 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.572 [2024-07-15 08:17:47.571652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:55.572 [2024-07-15 08:17:47.571801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:04:55.572 [2024-07-15 08:17:47.711485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.830 [2024-07-15 08:17:47.844215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.830 [2024-07-15 08:17:47.844334] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
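Editor's note on exit_on_failed_rpc_init: the first spdk_tgt (pid 59208, core mask 0x1) owns the default RPC socket /var/tmp/spdk.sock, so the second instance started on 0x2 fails rpc_listen ("... in use. Specify another.") and spdk_app_stop exits non-zero, as the trace below shows. A hedged sketch of that collision and of the usual way around it; the sleep is a crude stand-in for the waitforlisten helper, and /var/tmp/spdk2.sock is a made-up name for illustration:

    SPDK=/home/vagrant/spdk_repo/spdk

    # First target claims the default RPC socket /var/tmp/spdk.sock.
    $SPDK/build/bin/spdk_tgt -m 0x1 &
    sleep 1   # crude wait for the RPC socket to appear

    # Second target on another core mask but the same default socket:
    # RPC init fails and the app exits non-zero, which the test relies on.
    if ! $SPDK/build/bin/spdk_tgt -m 0x2; then
        echo "second instance refused the busy RPC socket, as expected"
    fi

    # To run two targets side by side, give the second its own socket
    # with -r (the json_config tests below do exactly this).
    $SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &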
00:04:55.830 [2024-07-15 08:17:47.844352] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.830 [2024-07-15 08:17:47.844362] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59208 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59208 ']' 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59208 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59208 00:04:55.830 killing process with pid 59208 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59208' 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59208 00:04:55.830 08:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59208 00:04:56.397 ************************************ 00:04:56.397 END TEST exit_on_failed_rpc_init 00:04:56.397 ************************************ 00:04:56.397 00:04:56.397 real 0m2.003s 00:04:56.397 user 0m2.440s 00:04:56.397 sys 0m0.436s 00:04:56.397 08:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.397 08:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.397 08:17:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.397 08:17:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.397 00:04:56.397 real 0m14.855s 00:04:56.397 user 0m14.411s 00:04:56.397 sys 0m1.548s 00:04:56.397 ************************************ 00:04:56.397 END TEST skip_rpc 00:04:56.397 ************************************ 00:04:56.397 08:17:48 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.397 08:17:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.397 08:17:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.397 08:17:48 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.397 08:17:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.397 
08:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.397 08:17:48 -- common/autotest_common.sh@10 -- # set +x 00:04:56.397 ************************************ 00:04:56.397 START TEST rpc_client 00:04:56.397 ************************************ 00:04:56.397 08:17:48 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.397 * Looking for test storage... 00:04:56.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:56.397 08:17:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:56.397 OK 00:04:56.397 08:17:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.397 00:04:56.397 real 0m0.103s 00:04:56.397 user 0m0.040s 00:04:56.397 sys 0m0.067s 00:04:56.397 ************************************ 00:04:56.397 END TEST rpc_client 00:04:56.397 ************************************ 00:04:56.398 08:17:48 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.398 08:17:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.657 08:17:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.657 08:17:48 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.657 08:17:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.657 08:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.657 08:17:48 -- common/autotest_common.sh@10 -- # set +x 00:04:56.657 ************************************ 00:04:56.657 START TEST json_config 00:04:56.657 ************************************ 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.657 08:17:48 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.657 08:17:48 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.657 08:17:48 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.657 08:17:48 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.657 08:17:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.657 08:17:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.657 08:17:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.657 08:17:48 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.657 08:17:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@47 -- # : 0 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.657 08:17:48 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.657 INFO: JSON configuration test init 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.657 08:17:48 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.657 08:17:48 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.657 08:17:48 json_config -- json_config/common.sh@10 -- # shift 00:04:56.657 08:17:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.657 08:17:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.657 08:17:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.657 Waiting for target to run... 00:04:56.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.657 08:17:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.657 08:17:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.657 08:17:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59344 00:04:56.657 08:17:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
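Editor's note on the json_config setup just traced: the target is started paused with --wait-for-rpc on its own socket (/var/tmp/spdk_tgt.sock), and the trace that follows feeds it the output of gen_nvme.sh --json-with-subsystems through load_config. A rough sketch of that start-up path, with flags copied from the trace; the pipe is shorthand for however json_config.sh actually wires the two commands together:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    # Start the target paused: it answers RPCs but defers subsystem
    # initialization until a configuration is pushed to it.
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &

    # Generate a config for the local NVMe devices and replay its
    # {"method": ..., "params": ...} entries against the paused target.
    $SPDK/scripts/gen_nvme.sh --json-with-subsystems | \
        $SPDK/scripts/rpc.py -s "$SOCK" load_config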
00:04:56.657 08:17:48 json_config -- json_config/common.sh@25 -- # waitforlisten 59344 /var/tmp/spdk_tgt.sock 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@829 -- # '[' -z 59344 ']' 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.657 08:17:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.657 08:17:48 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.658 08:17:48 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.658 08:17:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.658 [2024-07-15 08:17:48.775285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:56.658 [2024-07-15 08:17:48.775659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59344 ] 00:04:57.224 [2024-07-15 08:17:49.195357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.224 [2024-07-15 08:17:49.296273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.791 08:17:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.791 08:17:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:57.791 08:17:49 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.791 00:04:57.791 08:17:49 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:57.791 08:17:49 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:57.791 08:17:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.791 08:17:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.791 08:17:49 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:57.791 08:17:49 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:57.791 08:17:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.791 08:17:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.791 08:17:49 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.791 08:17:49 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:57.791 08:17:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:58.049 [2024-07-15 08:17:50.013079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.307 08:17:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:58.307 08:17:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.307 08:17:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.307 08:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.307 08:17:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.307 08:17:50 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.307 08:17:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.307 08:17:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:58.307 08:17:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:58.307 08:17:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:58.566 08:17:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.566 08:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:58.566 08:17:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.566 08:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:58.566 08:17:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.566 08:17:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:58.824 MallocForNvmf0 00:04:58.824 08:17:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:58.824 08:17:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.082 MallocForNvmf1 00:04:59.082 08:17:51 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.082 08:17:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.342 [2024-07-15 08:17:51.349339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.342 08:17:51 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.342 08:17:51 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.602 08:17:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.602 08:17:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.860 08:17:51 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.860 08:17:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.118 08:17:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.118 08:17:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.376 [2024-07-15 08:17:52.337921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.376 08:17:52 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:00.376 08:17:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.376 08:17:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.376 08:17:52 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:00.376 08:17:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.376 08:17:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.376 08:17:52 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:00.376 08:17:52 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.376 08:17:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.634 MallocBdevForConfigChangeCheck 00:05:00.634 08:17:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:00.634 08:17:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.634 08:17:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.634 08:17:52 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:00.634 08:17:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.200 INFO: shutting down applications... 00:05:01.200 08:17:53 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
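Editor's note: the RPC sequence traced above (bdev_malloc_create, nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) is the entire NVMe-oF/TCP target that the test later saves and diffs. Replayed by hand against the same socket it looks roughly like this; every value is copied from the trace, nothing is new:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Two RAM-backed bdevs to export as namespaces.
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # NVMe-oF/TCP transport (flags exactly as in the trace above).
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0

    # One subsystem, both namespaces, listening on 127.0.0.1:4420.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The "TCP Transport Init" and "Target Listening on 127.0.0.1 port 4420" notices in the log are the target acknowledging the transport and listener steps.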
00:05:01.200 08:17:53 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:01.200 08:17:53 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:01.200 08:17:53 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:01.200 08:17:53 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:01.458 Calling clear_iscsi_subsystem 00:05:01.458 Calling clear_nvmf_subsystem 00:05:01.458 Calling clear_nbd_subsystem 00:05:01.458 Calling clear_ublk_subsystem 00:05:01.458 Calling clear_vhost_blk_subsystem 00:05:01.458 Calling clear_vhost_scsi_subsystem 00:05:01.458 Calling clear_bdev_subsystem 00:05:01.458 08:17:53 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:01.458 08:17:53 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:01.458 08:17:53 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:01.458 08:17:53 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.459 08:17:53 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:01.459 08:17:53 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:01.716 08:17:53 json_config -- json_config/json_config.sh@345 -- # break 00:05:01.716 08:17:53 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:01.716 08:17:53 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:01.716 08:17:53 json_config -- json_config/common.sh@31 -- # local app=target 00:05:01.716 08:17:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:01.716 08:17:53 json_config -- json_config/common.sh@35 -- # [[ -n 59344 ]] 00:05:01.716 08:17:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59344 00:05:01.716 08:17:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:01.716 08:17:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.716 08:17:53 json_config -- json_config/common.sh@41 -- # kill -0 59344 00:05:01.716 08:17:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.283 08:17:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.283 08:17:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.283 SPDK target shutdown done 00:05:02.283 INFO: relaunching applications... 00:05:02.283 08:17:54 json_config -- json_config/common.sh@41 -- # kill -0 59344 00:05:02.283 08:17:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.283 08:17:54 json_config -- json_config/common.sh@43 -- # break 00:05:02.283 08:17:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.283 08:17:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:02.283 08:17:54 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
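Editor's note on the teardown traced above ("shutting down applications..."): every subsystem is cleared over RPC, the remaining configuration is checked to be empty, and only then is the target sent SIGINT and polled until the pid disappears. A condensed sketch with the pid and paths taken from this run; the 30 x 0.5 s polling loop mirrors json_config/common.sh@40-45:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    PID=59344   # target pid from this run

    # Drop every configured subsystem, then confirm nothing is left behind.
    $SPDK/test/json_config/clear_config.py -s "$SOCK" clear_config
    $SPDK/scripts/rpc.py -s "$SOCK" save_config | \
        $SPDK/test/json_config/config_filter.py -method check_empty

    # Ask the target to exit and wait up to ~15 s for the pid to vanish.
    kill -SIGINT "$PID"
    for _ in $(seq 1 30); do
        kill -0 "$PID" 2>/dev/null || break
        sleep 0.5
    done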
00:05:02.283 08:17:54 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:02.283 08:17:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.283 08:17:54 json_config -- json_config/common.sh@10 -- # shift 00:05:02.283 08:17:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.283 08:17:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.283 08:17:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.283 08:17:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.283 08:17:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.283 08:17:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59535 00:05:02.283 08:17:54 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:02.283 08:17:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.283 Waiting for target to run... 00:05:02.283 08:17:54 json_config -- json_config/common.sh@25 -- # waitforlisten 59535 /var/tmp/spdk_tgt.sock 00:05:02.283 08:17:54 json_config -- common/autotest_common.sh@829 -- # '[' -z 59535 ']' 00:05:02.283 08:17:54 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.283 08:17:54 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.283 08:17:54 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.283 08:17:54 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.283 08:17:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.542 [2024-07-15 08:17:54.458321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:02.542 [2024-07-15 08:17:54.458714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:05:02.800 [2024-07-15 08:17:54.934239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.059 [2024-07-15 08:17:55.047754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.059 [2024-07-15 08:17:55.176801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.318 [2024-07-15 08:17:55.386489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.318 [2024-07-15 08:17:55.418570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:03.318 00:05:03.318 INFO: Checking if target configuration is the same... 
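Editor's note on "Checking if target configuration is the same...": after relaunching from spdk_tgt_config.json, the trace that follows proves the round-trip is lossless by piping a fresh save_config and the on-disk JSON through config_filter.py -method sort and diffing them. Stripped of json_diff.sh's mktemp plumbing (which the trace shows in full), the check amounts to something like this; the process substitution is my shorthand, not the script's exact form:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    FILTER=$SPDK/test/json_config/config_filter.py

    # Same content once key order is normalized -> empty diff -> configs match.
    diff -u \
        <($SPDK/scripts/rpc.py -s "$SOCK" save_config | $FILTER -method sort) \
        <($FILTER -method sort < $SPDK/spdk_tgt_config.json) \
        && echo 'INFO: JSON config files are the same'

The second pass in the trace deletes MallocBdevForConfigChangeCheck first, so the same diff returns 1, which is how "configuration change detected" is verified.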
00:05:03.318 08:17:55 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.318 08:17:55 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:03.318 08:17:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:03.318 08:17:55 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:03.318 08:17:55 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:03.318 08:17:55 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.318 08:17:55 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:03.318 08:17:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.318 + '[' 2 -ne 2 ']' 00:05:03.318 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:03.318 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:03.318 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:03.318 +++ basename /dev/fd/62 00:05:03.318 ++ mktemp /tmp/62.XXX 00:05:03.318 + tmp_file_1=/tmp/62.GLZ 00:05:03.318 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.318 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:03.318 + tmp_file_2=/tmp/spdk_tgt_config.json.NxP 00:05:03.318 + ret=0 00:05:03.318 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:03.886 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:03.886 + diff -u /tmp/62.GLZ /tmp/spdk_tgt_config.json.NxP 00:05:03.886 INFO: JSON config files are the same 00:05:03.886 + echo 'INFO: JSON config files are the same' 00:05:03.886 + rm /tmp/62.GLZ /tmp/spdk_tgt_config.json.NxP 00:05:03.886 + exit 0 00:05:03.886 INFO: changing configuration and checking if this can be detected... 00:05:03.886 08:17:55 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:03.886 08:17:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:03.886 08:17:55 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.886 08:17:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:04.144 08:17:56 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:04.144 08:17:56 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.144 08:17:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.144 + '[' 2 -ne 2 ']' 00:05:04.144 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:04.144 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:04.144 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:04.144 +++ basename /dev/fd/62 00:05:04.144 ++ mktemp /tmp/62.XXX 00:05:04.144 + tmp_file_1=/tmp/62.VSQ 00:05:04.144 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.144 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.144 + tmp_file_2=/tmp/spdk_tgt_config.json.4Lq 00:05:04.144 + ret=0 00:05:04.144 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:04.403 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:04.661 + diff -u /tmp/62.VSQ /tmp/spdk_tgt_config.json.4Lq 00:05:04.661 + ret=1 00:05:04.661 + echo '=== Start of file: /tmp/62.VSQ ===' 00:05:04.661 + cat /tmp/62.VSQ 00:05:04.661 + echo '=== End of file: /tmp/62.VSQ ===' 00:05:04.661 + echo '' 00:05:04.661 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4Lq ===' 00:05:04.661 + cat /tmp/spdk_tgt_config.json.4Lq 00:05:04.661 + echo '=== End of file: /tmp/spdk_tgt_config.json.4Lq ===' 00:05:04.661 + echo '' 00:05:04.661 + rm /tmp/62.VSQ /tmp/spdk_tgt_config.json.4Lq 00:05:04.661 + exit 1 00:05:04.661 INFO: configuration change detected. 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@317 -- # [[ -n 59535 ]] 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.661 08:17:56 json_config -- json_config/json_config.sh@323 -- # killprocess 59535 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@948 -- # '[' -z 59535 ']' 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@952 -- # kill -0 59535 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@953 -- # uname 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59535 00:05:04.661 
killing process with pid 59535 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59535' 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@967 -- # kill 59535 00:05:04.661 08:17:56 json_config -- common/autotest_common.sh@972 -- # wait 59535 00:05:04.920 08:17:56 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.920 08:17:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:04.920 08:17:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.920 08:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.920 08:17:56 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:04.920 INFO: Success 00:05:04.920 08:17:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:04.920 ************************************ 00:05:04.920 END TEST json_config 00:05:04.920 ************************************ 00:05:04.920 00:05:04.920 real 0m8.371s 00:05:04.920 user 0m11.938s 00:05:04.920 sys 0m1.762s 00:05:04.920 08:17:56 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.920 08:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.920 08:17:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.920 08:17:57 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.920 08:17:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.920 08:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.920 08:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:04.920 ************************************ 00:05:04.920 START TEST json_config_extra_key 00:05:04.920 ************************************ 00:05:04.920 08:17:57 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:05.179 08:17:57 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.179 08:17:57 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.179 08:17:57 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.179 08:17:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.179 08:17:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.179 08:17:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.179 08:17:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:05.179 08:17:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.179 08:17:57 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:05.179 08:17:57 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:05.179 INFO: launching applications... 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:05.179 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:05.180 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:05.180 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.180 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:05.180 08:17:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.180 Waiting for target to run... 00:05:05.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59675 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
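The json_config_extra_key harness above keys all of its bookkeeping on the app name; a condensed sketch of that pattern, with the array names, socket path, parameters and config path copied from this log (the function body is a simplified stand-in for test/json_config/common.sh, not its literal contents):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    json_config_test_start_app() {
        local app=$1; shift
        # launch the target with its per-app core mask / memory size, RPC socket, plus any extra args
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }

    json_config_test_start_app target --json "${configs_path[target]}"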
00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:05.180 08:17:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59675 /var/tmp/spdk_tgt.sock 00:05:05.180 08:17:57 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59675 ']' 00:05:05.180 08:17:57 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.180 08:17:57 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.180 08:17:57 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.180 08:17:57 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.180 08:17:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.180 [2024-07-15 08:17:57.197336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:05.180 [2024-07-15 08:17:57.197734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59675 ] 00:05:05.745 [2024-07-15 08:17:57.629984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.745 [2024-07-15 08:17:57.738680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.745 [2024-07-15 08:17:57.761085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.312 08:17:58 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.312 08:17:58 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:06.312 00:05:06.312 INFO: shutting down applications... 00:05:06.312 08:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
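waitforlisten (from common/autotest_common.sh) is what holds the test back until the target's RPC socket accepts connections; a rough sketch of that wait loop, assuming the socket path and retry count (100) visible above — the rpc_get_methods probe is an assumption, not taken from this log:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            # give up if the target process died during startup
            kill -0 "$pid" 2> /dev/null || return 1
            # assume the readiness probe is a cheap RPC that succeeds once the socket is listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }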
00:05:06.312 08:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59675 ]] 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59675 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59675 00:05:06.312 08:17:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.569 08:17:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.569 08:17:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.569 08:17:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59675 00:05:06.569 08:17:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59675 00:05:07.134 SPDK target shutdown done 00:05:07.134 Success 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.134 08:17:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.134 08:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.134 00:05:07.134 real 0m2.169s 00:05:07.134 user 0m1.806s 00:05:07.134 sys 0m0.445s 00:05:07.134 ************************************ 00:05:07.134 END TEST json_config_extra_key 00:05:07.134 ************************************ 00:05:07.134 08:17:59 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.134 08:17:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.134 08:17:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.134 08:17:59 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.134 08:17:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.134 08:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.134 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:05:07.134 ************************************ 00:05:07.134 START TEST alias_rpc 00:05:07.134 ************************************ 00:05:07.134 08:17:59 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.393 * Looking for test storage... 
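The shutdown sequence above for the extra_key target is a SIGINT followed by a bounded poll; a condensed sketch of that loop, using the pid, retry bound and sleep interval from this run:

    pid=59675                      # the extra_key target started earlier in this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only checks that the process still exists; it fails once the target has exited
        kill -0 "$pid" 2> /dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'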
00:05:07.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:07.393 08:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.393 08:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59752 00:05:07.393 08:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59752 00:05:07.393 08:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.393 08:17:59 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59752 ']' 00:05:07.393 08:17:59 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.393 08:17:59 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.393 08:17:59 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.393 08:17:59 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.393 08:17:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.393 [2024-07-15 08:17:59.405326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:07.393 [2024-07-15 08:17:59.405453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59752 ] 00:05:07.393 [2024-07-15 08:17:59.548262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.650 [2024-07-15 08:17:59.715040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.650 [2024-07-15 08:17:59.796424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:08.592 08:18:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:08.592 08:18:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59752 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59752 ']' 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59752 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.592 08:18:00 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59752 00:05:08.849 08:18:00 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.849 killing process with pid 59752 00:05:08.849 08:18:00 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.849 08:18:00 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59752' 00:05:08.849 08:18:00 alias_rpc -- common/autotest_common.sh@967 -- # kill 59752 00:05:08.849 08:18:00 alias_rpc -- common/autotest_common.sh@972 -- # wait 59752 00:05:09.416 ************************************ 00:05:09.416 END TEST alias_rpc 00:05:09.416 ************************************ 00:05:09.416 00:05:09.416 real 0m2.069s 00:05:09.416 user 0m2.289s 00:05:09.416 sys 0m0.558s 00:05:09.416 08:18:01 alias_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.416 08:18:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 08:18:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.416 08:18:01 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:09.416 08:18:01 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.416 08:18:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.416 08:18:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.416 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 ************************************ 00:05:09.416 START TEST spdkcli_tcp 00:05:09.416 ************************************ 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.416 * Looking for test storage... 00:05:09.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59828 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.416 08:18:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59828 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59828 ']' 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.416 08:18:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 [2024-07-15 08:18:01.506758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
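The spdkcli_tcp run that starts here never talks to the UNIX-domain socket directly: the lines that follow publish it on 127.0.0.1:9998 with socat and drive it over TCP. A condensed sketch of that bridge, with the commands copied from this log (waitforlisten is the same helper used earlier):

    # start a target with two reactors (core mask 0x3, main core 0) on the default socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
    waitforlisten $! /var/tmp/spdk.sock

    # expose the UNIX-domain RPC socket on a local TCP port
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # issue RPCs over the TCP side; -r/-t add connection retries and a per-call timeout
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"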
00:05:09.416 [2024-07-15 08:18:01.507563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59828 ] 00:05:09.675 [2024-07-15 08:18:01.642309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.675 [2024-07-15 08:18:01.765406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.675 [2024-07-15 08:18:01.765419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.675 [2024-07-15 08:18:01.821964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.609 08:18:02 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.609 08:18:02 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:10.609 08:18:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59845 00:05:10.609 08:18:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.609 08:18:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.609 [ 00:05:10.609 "bdev_malloc_delete", 00:05:10.609 "bdev_malloc_create", 00:05:10.609 "bdev_null_resize", 00:05:10.609 "bdev_null_delete", 00:05:10.609 "bdev_null_create", 00:05:10.609 "bdev_nvme_cuse_unregister", 00:05:10.609 "bdev_nvme_cuse_register", 00:05:10.609 "bdev_opal_new_user", 00:05:10.609 "bdev_opal_set_lock_state", 00:05:10.609 "bdev_opal_delete", 00:05:10.609 "bdev_opal_get_info", 00:05:10.609 "bdev_opal_create", 00:05:10.609 "bdev_nvme_opal_revert", 00:05:10.609 "bdev_nvme_opal_init", 00:05:10.609 "bdev_nvme_send_cmd", 00:05:10.609 "bdev_nvme_get_path_iostat", 00:05:10.609 "bdev_nvme_get_mdns_discovery_info", 00:05:10.609 "bdev_nvme_stop_mdns_discovery", 00:05:10.609 "bdev_nvme_start_mdns_discovery", 00:05:10.609 "bdev_nvme_set_multipath_policy", 00:05:10.609 "bdev_nvme_set_preferred_path", 00:05:10.609 "bdev_nvme_get_io_paths", 00:05:10.609 "bdev_nvme_remove_error_injection", 00:05:10.609 "bdev_nvme_add_error_injection", 00:05:10.609 "bdev_nvme_get_discovery_info", 00:05:10.609 "bdev_nvme_stop_discovery", 00:05:10.609 "bdev_nvme_start_discovery", 00:05:10.609 "bdev_nvme_get_controller_health_info", 00:05:10.609 "bdev_nvme_disable_controller", 00:05:10.609 "bdev_nvme_enable_controller", 00:05:10.609 "bdev_nvme_reset_controller", 00:05:10.609 "bdev_nvme_get_transport_statistics", 00:05:10.609 "bdev_nvme_apply_firmware", 00:05:10.609 "bdev_nvme_detach_controller", 00:05:10.609 "bdev_nvme_get_controllers", 00:05:10.609 "bdev_nvme_attach_controller", 00:05:10.609 "bdev_nvme_set_hotplug", 00:05:10.609 "bdev_nvme_set_options", 00:05:10.609 "bdev_passthru_delete", 00:05:10.609 "bdev_passthru_create", 00:05:10.609 "bdev_lvol_set_parent_bdev", 00:05:10.609 "bdev_lvol_set_parent", 00:05:10.609 "bdev_lvol_check_shallow_copy", 00:05:10.609 "bdev_lvol_start_shallow_copy", 00:05:10.609 "bdev_lvol_grow_lvstore", 00:05:10.609 "bdev_lvol_get_lvols", 00:05:10.609 "bdev_lvol_get_lvstores", 00:05:10.609 "bdev_lvol_delete", 00:05:10.609 "bdev_lvol_set_read_only", 00:05:10.609 "bdev_lvol_resize", 00:05:10.609 "bdev_lvol_decouple_parent", 00:05:10.609 "bdev_lvol_inflate", 00:05:10.609 "bdev_lvol_rename", 00:05:10.609 "bdev_lvol_clone_bdev", 00:05:10.609 "bdev_lvol_clone", 00:05:10.609 "bdev_lvol_snapshot", 00:05:10.609 "bdev_lvol_create", 
00:05:10.609 "bdev_lvol_delete_lvstore", 00:05:10.609 "bdev_lvol_rename_lvstore", 00:05:10.609 "bdev_lvol_create_lvstore", 00:05:10.609 "bdev_raid_set_options", 00:05:10.609 "bdev_raid_remove_base_bdev", 00:05:10.609 "bdev_raid_add_base_bdev", 00:05:10.609 "bdev_raid_delete", 00:05:10.609 "bdev_raid_create", 00:05:10.609 "bdev_raid_get_bdevs", 00:05:10.609 "bdev_error_inject_error", 00:05:10.609 "bdev_error_delete", 00:05:10.609 "bdev_error_create", 00:05:10.609 "bdev_split_delete", 00:05:10.609 "bdev_split_create", 00:05:10.609 "bdev_delay_delete", 00:05:10.609 "bdev_delay_create", 00:05:10.609 "bdev_delay_update_latency", 00:05:10.609 "bdev_zone_block_delete", 00:05:10.609 "bdev_zone_block_create", 00:05:10.609 "blobfs_create", 00:05:10.609 "blobfs_detect", 00:05:10.609 "blobfs_set_cache_size", 00:05:10.609 "bdev_aio_delete", 00:05:10.609 "bdev_aio_rescan", 00:05:10.609 "bdev_aio_create", 00:05:10.609 "bdev_ftl_set_property", 00:05:10.609 "bdev_ftl_get_properties", 00:05:10.609 "bdev_ftl_get_stats", 00:05:10.609 "bdev_ftl_unmap", 00:05:10.609 "bdev_ftl_unload", 00:05:10.609 "bdev_ftl_delete", 00:05:10.609 "bdev_ftl_load", 00:05:10.609 "bdev_ftl_create", 00:05:10.609 "bdev_virtio_attach_controller", 00:05:10.609 "bdev_virtio_scsi_get_devices", 00:05:10.609 "bdev_virtio_detach_controller", 00:05:10.609 "bdev_virtio_blk_set_hotplug", 00:05:10.609 "bdev_iscsi_delete", 00:05:10.609 "bdev_iscsi_create", 00:05:10.609 "bdev_iscsi_set_options", 00:05:10.609 "bdev_uring_delete", 00:05:10.609 "bdev_uring_rescan", 00:05:10.609 "bdev_uring_create", 00:05:10.609 "accel_error_inject_error", 00:05:10.609 "ioat_scan_accel_module", 00:05:10.609 "dsa_scan_accel_module", 00:05:10.609 "iaa_scan_accel_module", 00:05:10.609 "keyring_file_remove_key", 00:05:10.609 "keyring_file_add_key", 00:05:10.609 "keyring_linux_set_options", 00:05:10.609 "iscsi_get_histogram", 00:05:10.609 "iscsi_enable_histogram", 00:05:10.609 "iscsi_set_options", 00:05:10.609 "iscsi_get_auth_groups", 00:05:10.609 "iscsi_auth_group_remove_secret", 00:05:10.609 "iscsi_auth_group_add_secret", 00:05:10.609 "iscsi_delete_auth_group", 00:05:10.609 "iscsi_create_auth_group", 00:05:10.609 "iscsi_set_discovery_auth", 00:05:10.609 "iscsi_get_options", 00:05:10.609 "iscsi_target_node_request_logout", 00:05:10.609 "iscsi_target_node_set_redirect", 00:05:10.609 "iscsi_target_node_set_auth", 00:05:10.609 "iscsi_target_node_add_lun", 00:05:10.609 "iscsi_get_stats", 00:05:10.609 "iscsi_get_connections", 00:05:10.609 "iscsi_portal_group_set_auth", 00:05:10.609 "iscsi_start_portal_group", 00:05:10.609 "iscsi_delete_portal_group", 00:05:10.609 "iscsi_create_portal_group", 00:05:10.609 "iscsi_get_portal_groups", 00:05:10.609 "iscsi_delete_target_node", 00:05:10.609 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.609 "iscsi_target_node_add_pg_ig_maps", 00:05:10.609 "iscsi_create_target_node", 00:05:10.609 "iscsi_get_target_nodes", 00:05:10.609 "iscsi_delete_initiator_group", 00:05:10.609 "iscsi_initiator_group_remove_initiators", 00:05:10.609 "iscsi_initiator_group_add_initiators", 00:05:10.609 "iscsi_create_initiator_group", 00:05:10.609 "iscsi_get_initiator_groups", 00:05:10.609 "nvmf_set_crdt", 00:05:10.609 "nvmf_set_config", 00:05:10.609 "nvmf_set_max_subsystems", 00:05:10.609 "nvmf_stop_mdns_prr", 00:05:10.609 "nvmf_publish_mdns_prr", 00:05:10.609 "nvmf_subsystem_get_listeners", 00:05:10.609 "nvmf_subsystem_get_qpairs", 00:05:10.609 "nvmf_subsystem_get_controllers", 00:05:10.609 "nvmf_get_stats", 00:05:10.609 "nvmf_get_transports", 00:05:10.609 
"nvmf_create_transport", 00:05:10.609 "nvmf_get_targets", 00:05:10.609 "nvmf_delete_target", 00:05:10.610 "nvmf_create_target", 00:05:10.610 "nvmf_subsystem_allow_any_host", 00:05:10.610 "nvmf_subsystem_remove_host", 00:05:10.610 "nvmf_subsystem_add_host", 00:05:10.610 "nvmf_ns_remove_host", 00:05:10.610 "nvmf_ns_add_host", 00:05:10.610 "nvmf_subsystem_remove_ns", 00:05:10.610 "nvmf_subsystem_add_ns", 00:05:10.610 "nvmf_subsystem_listener_set_ana_state", 00:05:10.610 "nvmf_discovery_get_referrals", 00:05:10.610 "nvmf_discovery_remove_referral", 00:05:10.610 "nvmf_discovery_add_referral", 00:05:10.610 "nvmf_subsystem_remove_listener", 00:05:10.610 "nvmf_subsystem_add_listener", 00:05:10.610 "nvmf_delete_subsystem", 00:05:10.610 "nvmf_create_subsystem", 00:05:10.610 "nvmf_get_subsystems", 00:05:10.610 "env_dpdk_get_mem_stats", 00:05:10.610 "nbd_get_disks", 00:05:10.610 "nbd_stop_disk", 00:05:10.610 "nbd_start_disk", 00:05:10.610 "ublk_recover_disk", 00:05:10.610 "ublk_get_disks", 00:05:10.610 "ublk_stop_disk", 00:05:10.610 "ublk_start_disk", 00:05:10.610 "ublk_destroy_target", 00:05:10.610 "ublk_create_target", 00:05:10.610 "virtio_blk_create_transport", 00:05:10.610 "virtio_blk_get_transports", 00:05:10.610 "vhost_controller_set_coalescing", 00:05:10.610 "vhost_get_controllers", 00:05:10.610 "vhost_delete_controller", 00:05:10.610 "vhost_create_blk_controller", 00:05:10.610 "vhost_scsi_controller_remove_target", 00:05:10.610 "vhost_scsi_controller_add_target", 00:05:10.610 "vhost_start_scsi_controller", 00:05:10.610 "vhost_create_scsi_controller", 00:05:10.610 "thread_set_cpumask", 00:05:10.610 "framework_get_governor", 00:05:10.610 "framework_get_scheduler", 00:05:10.610 "framework_set_scheduler", 00:05:10.610 "framework_get_reactors", 00:05:10.610 "thread_get_io_channels", 00:05:10.610 "thread_get_pollers", 00:05:10.610 "thread_get_stats", 00:05:10.610 "framework_monitor_context_switch", 00:05:10.610 "spdk_kill_instance", 00:05:10.610 "log_enable_timestamps", 00:05:10.610 "log_get_flags", 00:05:10.610 "log_clear_flag", 00:05:10.610 "log_set_flag", 00:05:10.610 "log_get_level", 00:05:10.610 "log_set_level", 00:05:10.610 "log_get_print_level", 00:05:10.610 "log_set_print_level", 00:05:10.610 "framework_enable_cpumask_locks", 00:05:10.610 "framework_disable_cpumask_locks", 00:05:10.610 "framework_wait_init", 00:05:10.610 "framework_start_init", 00:05:10.610 "scsi_get_devices", 00:05:10.610 "bdev_get_histogram", 00:05:10.610 "bdev_enable_histogram", 00:05:10.610 "bdev_set_qos_limit", 00:05:10.610 "bdev_set_qd_sampling_period", 00:05:10.610 "bdev_get_bdevs", 00:05:10.610 "bdev_reset_iostat", 00:05:10.610 "bdev_get_iostat", 00:05:10.610 "bdev_examine", 00:05:10.610 "bdev_wait_for_examine", 00:05:10.610 "bdev_set_options", 00:05:10.610 "notify_get_notifications", 00:05:10.610 "notify_get_types", 00:05:10.610 "accel_get_stats", 00:05:10.610 "accel_set_options", 00:05:10.610 "accel_set_driver", 00:05:10.610 "accel_crypto_key_destroy", 00:05:10.610 "accel_crypto_keys_get", 00:05:10.610 "accel_crypto_key_create", 00:05:10.610 "accel_assign_opc", 00:05:10.610 "accel_get_module_info", 00:05:10.610 "accel_get_opc_assignments", 00:05:10.610 "vmd_rescan", 00:05:10.610 "vmd_remove_device", 00:05:10.610 "vmd_enable", 00:05:10.610 "sock_get_default_impl", 00:05:10.610 "sock_set_default_impl", 00:05:10.610 "sock_impl_set_options", 00:05:10.610 "sock_impl_get_options", 00:05:10.610 "iobuf_get_stats", 00:05:10.610 "iobuf_set_options", 00:05:10.610 "framework_get_pci_devices", 00:05:10.610 
"framework_get_config", 00:05:10.610 "framework_get_subsystems", 00:05:10.610 "trace_get_info", 00:05:10.610 "trace_get_tpoint_group_mask", 00:05:10.610 "trace_disable_tpoint_group", 00:05:10.610 "trace_enable_tpoint_group", 00:05:10.610 "trace_clear_tpoint_mask", 00:05:10.610 "trace_set_tpoint_mask", 00:05:10.610 "keyring_get_keys", 00:05:10.610 "spdk_get_version", 00:05:10.610 "rpc_get_methods" 00:05:10.610 ] 00:05:10.867 08:18:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.867 08:18:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.867 08:18:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59828 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59828 ']' 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59828 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59828 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.867 killing process with pid 59828 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59828' 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59828 00:05:10.867 08:18:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59828 00:05:11.132 ************************************ 00:05:11.132 END TEST spdkcli_tcp 00:05:11.132 ************************************ 00:05:11.132 00:05:11.132 real 0m1.888s 00:05:11.132 user 0m3.505s 00:05:11.132 sys 0m0.533s 00:05:11.132 08:18:03 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.132 08:18:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.132 08:18:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.132 08:18:03 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.132 08:18:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.132 08:18:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.132 08:18:03 -- common/autotest_common.sh@10 -- # set +x 00:05:11.390 ************************************ 00:05:11.390 START TEST dpdk_mem_utility 00:05:11.390 ************************************ 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.390 * Looking for test storage... 00:05:11.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:11.390 08:18:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:11.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.390 08:18:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59913 00:05:11.390 08:18:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59913 00:05:11.390 08:18:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59913 ']' 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.390 08:18:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.390 [2024-07-15 08:18:03.454753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:11.390 [2024-07-15 08:18:03.455436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59913 ] 00:05:11.648 [2024-07-15 08:18:03.590069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.648 [2024-07-15 08:18:03.710477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.648 [2024-07-15 08:18:03.764691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.614 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.614 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:12.614 08:18:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.614 08:18:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.614 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.614 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.614 { 00:05:12.614 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.614 } 00:05:12.614 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.614 08:18:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.614 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:12.614 1 heaps totaling size 814.000000 MiB 00:05:12.614 size: 814.000000 MiB heap id: 0 00:05:12.614 end heaps---------- 00:05:12.614 8 mempools totaling size 598.116089 MiB 00:05:12.614 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.614 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.614 size: 84.521057 MiB name: bdev_io_59913 00:05:12.614 size: 51.011292 MiB name: evtpool_59913 00:05:12.614 size: 50.003479 MiB name: msgpool_59913 00:05:12.614 size: 21.763794 MiB name: PDU_Pool 00:05:12.614 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.614 size: 0.026123 MiB name: Session_Pool 00:05:12.614 end mempools------- 00:05:12.614 6 memzones totaling size 4.142822 MiB 00:05:12.614 size: 1.000366 MiB name: RG_ring_0_59913 00:05:12.614 size: 1.000366 MiB 
name: RG_ring_1_59913 00:05:12.614 size: 1.000366 MiB name: RG_ring_4_59913 00:05:12.614 size: 1.000366 MiB name: RG_ring_5_59913 00:05:12.614 size: 0.125366 MiB name: RG_ring_2_59913 00:05:12.614 size: 0.015991 MiB name: RG_ring_3_59913 00:05:12.614 end memzones------- 00:05:12.614 08:18:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:12.614 heap id: 0 total size: 814.000000 MiB number of busy elements: 304 number of free elements: 15 00:05:12.614 list of free elements. size: 12.471191 MiB 00:05:12.614 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:12.614 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:12.614 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:12.614 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:12.614 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:12.614 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:12.614 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:12.614 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:12.614 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:12.614 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:05:12.614 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:12.614 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:12.614 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:12.614 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:12.614 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:12.614 list of standard malloc elements. size: 199.266235 MiB 00:05:12.614 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:12.614 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:12.614 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:12.614 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:12.614 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:12.614 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:12.614 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:12.614 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:12.614 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:12.614 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:12.614 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:12.614 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:12.615 element at address: 
0x200003a590c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000070fdd80 with size: 
0.000183 MiB 00:05:12.615 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:12.615 
element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:12.615 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:12.616 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:12.616 element at address: 
0x200027e655c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e700 with size: 
0.000183 MiB 00:05:12.616 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:12.616 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:12.616 list of memzone associated elements. 
size: 602.262573 MiB 00:05:12.616 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:12.616 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:12.616 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:12.616 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:12.616 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:12.616 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59913_0 00:05:12.616 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:12.616 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59913_0 00:05:12.616 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:12.616 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59913_0 00:05:12.616 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:12.616 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:12.616 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:12.616 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:12.616 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:12.616 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59913 00:05:12.616 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:12.616 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59913 00:05:12.616 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:12.616 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59913 00:05:12.616 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:12.616 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:12.616 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:12.616 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:12.616 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:12.616 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:12.616 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:12.616 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:12.616 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:12.616 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59913 00:05:12.616 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:12.616 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59913 00:05:12.616 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:12.616 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59913 00:05:12.617 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:12.617 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59913 00:05:12.617 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:12.617 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59913 00:05:12.617 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:12.617 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:12.617 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:12.617 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:12.617 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:12.617 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:12.617 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:12.617 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59913 00:05:12.617 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:12.617 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:12.617 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:12.617 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:12.617 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:12.617 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59913 00:05:12.617 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:12.617 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:12.617 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:12.617 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59913 00:05:12.617 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:12.617 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59913 00:05:12.617 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:12.617 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:12.617 08:18:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:12.617 08:18:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59913 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59913 ']' 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59913 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59913 00:05:12.617 killing process with pid 59913 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59913' 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59913 00:05:12.617 08:18:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59913 00:05:12.875 00:05:12.875 real 0m1.724s 00:05:12.875 user 0m1.890s 00:05:12.875 sys 0m0.430s 00:05:12.875 08:18:05 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.875 ************************************ 00:05:12.875 END TEST dpdk_mem_utility 00:05:12.875 ************************************ 00:05:12.875 08:18:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.133 08:18:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.133 08:18:05 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:13.133 08:18:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.133 08:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.133 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.133 ************************************ 00:05:13.133 START TEST event 00:05:13.133 ************************************ 00:05:13.133 08:18:05 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:13.133 * Looking for test storage... 
00:05:13.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:13.133 08:18:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:13.133 08:18:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:13.133 08:18:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.133 08:18:05 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:13.133 08:18:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.133 08:18:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.133 ************************************ 00:05:13.133 START TEST event_perf 00:05:13.133 ************************************ 00:05:13.133 08:18:05 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.133 Running I/O for 1 seconds...[2024-07-15 08:18:05.200608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:13.133 [2024-07-15 08:18:05.200759] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59990 ] 00:05:13.390 [2024-07-15 08:18:05.349359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.390 [2024-07-15 08:18:05.494664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.390 [2024-07-15 08:18:05.494755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.390 [2024-07-15 08:18:05.495647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.390 [2024-07-15 08:18:05.495662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.762 Running I/O for 1 seconds... 00:05:14.762 lcore 0: 201942 00:05:14.762 lcore 1: 201941 00:05:14.762 lcore 2: 201940 00:05:14.762 lcore 3: 201941 00:05:14.762 done. 00:05:14.762 00:05:14.762 real 0m1.408s 00:05:14.762 user 0m4.200s 00:05:14.762 sys 0m0.082s 00:05:14.762 08:18:06 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.762 08:18:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.762 ************************************ 00:05:14.762 END TEST event_perf 00:05:14.762 ************************************ 00:05:14.762 08:18:06 event -- common/autotest_common.sh@1142 -- # return 0 00:05:14.762 08:18:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:14.762 08:18:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:14.762 08:18:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.762 08:18:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.762 ************************************ 00:05:14.762 START TEST event_reactor 00:05:14.762 ************************************ 00:05:14.762 08:18:06 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:14.762 [2024-07-15 08:18:06.652420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:14.762 [2024-07-15 08:18:06.652594] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:05:14.762 [2024-07-15 08:18:06.790430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.762 [2024-07-15 08:18:06.907257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.135 test_start 00:05:16.136 oneshot 00:05:16.136 tick 100 00:05:16.136 tick 100 00:05:16.136 tick 250 00:05:16.136 tick 100 00:05:16.136 tick 100 00:05:16.136 tick 100 00:05:16.136 tick 250 00:05:16.136 tick 500 00:05:16.136 tick 100 00:05:16.136 tick 100 00:05:16.136 tick 250 00:05:16.136 tick 100 00:05:16.136 tick 100 00:05:16.136 test_end 00:05:16.136 ************************************ 00:05:16.136 END TEST event_reactor 00:05:16.136 ************************************ 00:05:16.136 00:05:16.136 real 0m1.360s 00:05:16.136 user 0m1.202s 00:05:16.136 sys 0m0.052s 00:05:16.136 08:18:07 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.136 08:18:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.136 08:18:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:16.136 08:18:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.136 08:18:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:16.136 08:18:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.136 08:18:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.136 ************************************ 00:05:16.136 START TEST event_reactor_perf 00:05:16.136 ************************************ 00:05:16.136 08:18:08 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.136 [2024-07-15 08:18:08.062188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:16.136 [2024-07-15 08:18:08.062294] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:05:16.136 [2024-07-15 08:18:08.201669] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.394 [2024-07-15 08:18:08.317148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.327 test_start 00:05:17.327 test_end 00:05:17.327 Performance: 377834 events per second 00:05:17.327 ************************************ 00:05:17.327 END TEST event_reactor_perf 00:05:17.327 ************************************ 00:05:17.327 00:05:17.327 real 0m1.359s 00:05:17.327 user 0m1.196s 00:05:17.327 sys 0m0.055s 00:05:17.327 08:18:09 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.327 08:18:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.327 08:18:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.327 08:18:09 event -- event/event.sh@49 -- # uname -s 00:05:17.327 08:18:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:17.327 08:18:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:17.327 08:18:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.327 08:18:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.327 08:18:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.327 ************************************ 00:05:17.327 START TEST event_scheduler 00:05:17.327 ************************************ 00:05:17.327 08:18:09 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:17.585 * Looking for test storage... 00:05:17.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:17.585 08:18:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:17.585 08:18:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60126 00:05:17.585 08:18:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:17.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.585 08:18:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.585 08:18:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60126 00:05:17.585 08:18:09 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60126 ']' 00:05:17.585 08:18:09 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.585 08:18:09 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.585 08:18:09 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.585 08:18:09 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.585 08:18:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.585 [2024-07-15 08:18:09.598523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:17.585 [2024-07-15 08:18:09.598648] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60126 ] 00:05:17.585 [2024-07-15 08:18:09.739025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.855 [2024-07-15 08:18:09.873027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.855 [2024-07-15 08:18:09.873124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.855 [2024-07-15 08:18:09.873270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.855 [2024-07-15 08:18:09.873275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:18.790 08:18:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.790 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.790 POWER: Cannot set governor of lcore 0 to performance 00:05:18.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.790 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.790 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:18.790 POWER: Cannot set governor of lcore 0 to userspace 00:05:18.790 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:18.790 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:18.790 POWER: Unable to set Power Management Environment for lcore 0 00:05:18.790 [2024-07-15 08:18:10.639791] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:18.790 [2024-07-15 08:18:10.639805] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:18.790 [2024-07-15 08:18:10.639814] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:18.790 [2024-07-15 08:18:10.639826] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:18.790 [2024-07-15 08:18:10.639841] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:18.790 [2024-07-15 08:18:10.639848] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 [2024-07-15 08:18:10.707209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.790 [2024-07-15 08:18:10.747910] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 ************************************ 00:05:18.790 START TEST scheduler_create_thread 00:05:18.790 ************************************ 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 2 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 3 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 4 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 5 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 6 00:05:18.790 
08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 7 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 8 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 9 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 10 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.790 08:18:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.790 08:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.164 08:18:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.164 08:18:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.164 08:18:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.164 08:18:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.164 08:18:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.565 ************************************ 00:05:21.565 END TEST scheduler_create_thread 00:05:21.565 ************************************ 00:05:21.565 08:18:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.565 00:05:21.565 real 0m2.612s 00:05:21.565 user 0m0.019s 00:05:21.565 sys 0m0.005s 00:05:21.565 08:18:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.565 08:18:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:21.565 08:18:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.565 08:18:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60126 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60126 ']' 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60126 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60126 00:05:21.565 killing process with pid 60126 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60126' 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60126 00:05:21.565 08:18:13 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60126 00:05:21.824 [2024-07-15 08:18:13.852514] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
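The scheduler_create_thread walkthrough above reduces to a short sequence of scheduler-plugin RPCs issued through rpc_cmd, the autotest helper that forwards to scripts/rpc.py. A condensed sketch of that sequence follows, assuming the scheduler test app from test/event/scheduler is already running and reachable on the default RPC socket; the thread IDs passed to set_active and delete are whatever the create calls return at run time (11 and 12 in the run above), not fixed constants.

# four busy threads, one pinned to each core of the 0xF mask
for mask in 0x1 0x2 0x4 0x8; do
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
# four idle threads, pinned the same way
for mask in 0x1 0x2 0x4 0x8; do
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# unpinned threads with different activity levels
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
# create one more thread, then delete it again
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"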
00:05:22.082 00:05:22.082 real 0m4.658s 00:05:22.082 user 0m8.846s 00:05:22.082 sys 0m0.372s 00:05:22.082 08:18:14 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.082 08:18:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.082 ************************************ 00:05:22.082 END TEST event_scheduler 00:05:22.082 ************************************ 00:05:22.082 08:18:14 event -- common/autotest_common.sh@1142 -- # return 0 00:05:22.082 08:18:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.082 08:18:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.082 08:18:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.082 08:18:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.082 08:18:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.082 ************************************ 00:05:22.082 START TEST app_repeat 00:05:22.082 ************************************ 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.082 Process app_repeat pid: 60220 00:05:22.082 spdk_app_start Round 0 00:05:22.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60220 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60220' 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.082 08:18:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60220 /var/tmp/spdk-nbd.sock 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60220 ']' 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.082 08:18:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.082 [2024-07-15 08:18:14.205995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:22.082 [2024-07-15 08:18:14.206085] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60220 ] 00:05:22.340 [2024-07-15 08:18:14.342884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.340 [2024-07-15 08:18:14.475087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.340 [2024-07-15 08:18:14.475102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.598 [2024-07-15 08:18:14.532004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:23.163 08:18:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.163 08:18:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:23.163 08:18:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.422 Malloc0 00:05:23.422 08:18:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.680 Malloc1 00:05:23.680 08:18:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.680 08:18:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.938 /dev/nbd0 00:05:23.938 08:18:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.938 08:18:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.938 08:18:16 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.938 1+0 records in 00:05:23.938 1+0 records out 00:05:23.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227825 s, 18.0 MB/s 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.938 08:18:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:23.939 08:18:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.939 08:18:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.939 08:18:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:23.939 08:18:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.939 08:18:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.939 08:18:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.197 /dev/nbd1 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.197 1+0 records in 00:05:24.197 1+0 records out 00:05:24.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733439 s, 5.6 MB/s 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.197 08:18:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.197 08:18:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.763 { 00:05:24.763 "nbd_device": "/dev/nbd0", 00:05:24.763 "bdev_name": "Malloc0" 00:05:24.763 }, 00:05:24.763 { 00:05:24.763 "nbd_device": "/dev/nbd1", 00:05:24.763 "bdev_name": "Malloc1" 00:05:24.763 } 00:05:24.763 ]' 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.763 { 00:05:24.763 "nbd_device": "/dev/nbd0", 00:05:24.763 "bdev_name": "Malloc0" 00:05:24.763 }, 00:05:24.763 { 00:05:24.763 "nbd_device": "/dev/nbd1", 00:05:24.763 "bdev_name": "Malloc1" 00:05:24.763 } 00:05:24.763 ]' 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.763 /dev/nbd1' 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.763 /dev/nbd1' 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.763 08:18:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.764 256+0 records in 00:05:24.764 256+0 records out 00:05:24.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070677 s, 148 MB/s 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.764 256+0 records in 00:05:24.764 256+0 records out 00:05:24.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261631 s, 40.1 MB/s 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.764 256+0 records in 00:05:24.764 256+0 records out 00:05:24.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283126 s, 37.0 MB/s 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.764 08:18:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.022 08:18:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.280 08:18:17 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.280 08:18:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.539 08:18:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.539 08:18:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.539 08:18:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.797 08:18:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.797 08:18:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.054 08:18:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.311 [2024-07-15 08:18:18.262204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.311 [2024-07-15 08:18:18.376807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.311 [2024-07-15 08:18:18.376819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.311 [2024-07-15 08:18:18.430233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.311 [2024-07-15 08:18:18.430326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.311 [2024-07-15 08:18:18.430342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.592 08:18:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.592 spdk_app_start Round 1 00:05:29.592 08:18:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.592 08:18:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60220 /var/tmp/spdk-nbd.sock 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60220 ']' 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
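Each app_repeat round above exercises the same nbd_rpc_data_verify flow from nbd_common.sh: Malloc0 and Malloc1 are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device, and the devices are read back and compared against the source file. Stripped of the xtrace plumbing, the data path looks roughly like the sketch below; it assumes both nbd devices have already been attached via nbd_start_disk, as in the trace, and randfile is just an illustrative variable name for the temp file path shown in the log.

randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# seed a 1 MiB reference file with random data
dd if=/dev/urandom of="$randfile" bs=4096 count=256

# push the reference data through each exported nbd device, bypassing the page cache
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
done

# read each device back and compare it byte-for-byte against the reference file
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$randfile" "$nbd"
done

rm "$randfile"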
00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.592 08:18:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.592 08:18:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.592 Malloc0 00:05:29.592 08:18:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.851 Malloc1 00:05:29.851 08:18:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.851 08:18:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.111 /dev/nbd0 00:05:30.111 08:18:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.111 08:18:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.111 1+0 records in 00:05:30.111 1+0 records out 
00:05:30.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227466 s, 18.0 MB/s 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.111 08:18:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.111 08:18:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.111 08:18:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.111 08:18:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.369 /dev/nbd1 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.369 1+0 records in 00:05:30.369 1+0 records out 00:05:30.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340799 s, 12.0 MB/s 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.369 08:18:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.369 08:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.627 { 00:05:30.627 "nbd_device": "/dev/nbd0", 00:05:30.627 "bdev_name": "Malloc0" 00:05:30.627 }, 00:05:30.627 { 00:05:30.627 "nbd_device": "/dev/nbd1", 00:05:30.627 "bdev_name": "Malloc1" 00:05:30.627 } 
00:05:30.627 ]' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.627 { 00:05:30.627 "nbd_device": "/dev/nbd0", 00:05:30.627 "bdev_name": "Malloc0" 00:05:30.627 }, 00:05:30.627 { 00:05:30.627 "nbd_device": "/dev/nbd1", 00:05:30.627 "bdev_name": "Malloc1" 00:05:30.627 } 00:05:30.627 ]' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.627 /dev/nbd1' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.627 /dev/nbd1' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.627 256+0 records in 00:05:30.627 256+0 records out 00:05:30.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010168 s, 103 MB/s 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.627 256+0 records in 00:05:30.627 256+0 records out 00:05:30.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292416 s, 35.9 MB/s 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.627 08:18:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.885 256+0 records in 00:05:30.885 256+0 records out 00:05:30.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298136 s, 35.2 MB/s 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.885 08:18:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.142 08:18:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.400 08:18:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.679 08:18:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.679 08:18:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.937 08:18:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.195 [2024-07-15 08:18:24.231989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.195 [2024-07-15 08:18:24.350158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.195 [2024-07-15 08:18:24.350170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.453 [2024-07-15 08:18:24.406441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.453 [2024-07-15 08:18:24.406539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.453 [2024-07-15 08:18:24.406554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.980 08:18:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.980 spdk_app_start Round 2 00:05:34.980 08:18:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:34.980 08:18:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60220 /var/tmp/spdk-nbd.sock 00:05:34.980 08:18:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60220 ']' 00:05:34.980 08:18:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.980 08:18:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.980 08:18:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
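The write/verify pass traced above reduces to a short dd/cmp sequence: fill a 1 MiB scratch file from /dev/urandom, copy it to every exported NBD device with O_DIRECT, then compare each device back against the file. A minimal standalone sketch of that sequence, with the scratch path and device list taken from the trace and error handling omitted:

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  # 256 x 4 KiB = 1 MiB of random data to mirror onto each device
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  # read back and compare the first 1 MiB of each device; cmp exits non-zero on a mismatch
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"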
00:05:34.980 08:18:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.980 08:18:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.238 08:18:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.238 08:18:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.238 08:18:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.507 Malloc0 00:05:35.507 08:18:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.766 Malloc1 00:05:35.766 08:18:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.766 08:18:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.023 /dev/nbd0 00:05:36.281 08:18:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.281 08:18:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.281 1+0 records in 00:05:36.281 1+0 records out 
00:05:36.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030749 s, 13.3 MB/s 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.281 08:18:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.281 08:18:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.281 08:18:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.281 08:18:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.539 /dev/nbd1 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.539 1+0 records in 00:05:36.539 1+0 records out 00:05:36.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338308 s, 12.1 MB/s 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.539 08:18:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.539 08:18:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.796 { 00:05:36.796 "nbd_device": "/dev/nbd0", 00:05:36.796 "bdev_name": "Malloc0" 00:05:36.796 }, 00:05:36.796 { 00:05:36.796 "nbd_device": "/dev/nbd1", 00:05:36.796 "bdev_name": "Malloc1" 00:05:36.796 } 
00:05:36.796 ]' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.796 { 00:05:36.796 "nbd_device": "/dev/nbd0", 00:05:36.796 "bdev_name": "Malloc0" 00:05:36.796 }, 00:05:36.796 { 00:05:36.796 "nbd_device": "/dev/nbd1", 00:05:36.796 "bdev_name": "Malloc1" 00:05:36.796 } 00:05:36.796 ]' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.796 /dev/nbd1' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.796 /dev/nbd1' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.796 256+0 records in 00:05:36.796 256+0 records out 00:05:36.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00729655 s, 144 MB/s 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.796 08:18:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.796 256+0 records in 00:05:36.797 256+0 records out 00:05:36.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218296 s, 48.0 MB/s 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.797 256+0 records in 00:05:36.797 256+0 records out 00:05:36.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291114 s, 36.0 MB/s 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.797 08:18:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.797 08:18:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.054 08:18:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.055 08:18:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.312 08:18:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.570 08:18:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.570 08:18:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.829 08:18:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.086 [2024-07-15 08:18:30.188887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.344 [2024-07-15 08:18:30.301415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.344 [2024-07-15 08:18:30.301429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.344 [2024-07-15 08:18:30.353438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:38.344 [2024-07-15 08:18:30.353529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.344 [2024-07-15 08:18:30.353544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.886 08:18:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60220 /var/tmp/spdk-nbd.sock 00:05:40.886 08:18:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60220 ']' 00:05:40.886 08:18:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.886 08:18:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.886 08:18:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
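Each round above drives the target entirely over its RPC socket: create the malloc bdevs, export them as NBD devices, and poll /proc/partitions until the kernel has registered the node before issuing any I/O. A condensed sketch of that setup, using the socket path and sizes from the trace; the 0.1 s retry sleep is an assumption, while the 20-attempt cap matches the waitfornbd loop shown above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # 64 MB malloc bdev with a 4096-byte block size; the target reports it as Malloc0
  $rpc -s "$sock" bdev_malloc_create 64 4096
  $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  # wait (up to 20 tries) for the kernel to expose the partition entry
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  # list what is exported; the test greps and counts this output
  $rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'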
00:05:40.886 08:18:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.886 08:18:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:41.152 08:18:33 event.app_repeat -- event/event.sh@39 -- # killprocess 60220 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60220 ']' 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60220 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60220 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.152 killing process with pid 60220 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60220' 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60220 00:05:41.152 08:18:33 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60220 00:05:41.418 spdk_app_start is called in Round 0. 00:05:41.418 Shutdown signal received, stop current app iteration 00:05:41.419 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:41.419 spdk_app_start is called in Round 1. 00:05:41.419 Shutdown signal received, stop current app iteration 00:05:41.419 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:41.419 spdk_app_start is called in Round 2. 00:05:41.419 Shutdown signal received, stop current app iteration 00:05:41.419 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:41.419 spdk_app_start is called in Round 3. 
00:05:41.419 Shutdown signal received, stop current app iteration 00:05:41.419 08:18:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:41.419 08:18:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:41.419 00:05:41.419 real 0m19.344s 00:05:41.419 user 0m43.459s 00:05:41.419 sys 0m2.882s 00:05:41.419 08:18:33 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.419 08:18:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.419 ************************************ 00:05:41.419 END TEST app_repeat 00:05:41.419 ************************************ 00:05:41.419 08:18:33 event -- common/autotest_common.sh@1142 -- # return 0 00:05:41.419 08:18:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.419 08:18:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.419 08:18:33 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.419 08:18:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.419 08:18:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.419 ************************************ 00:05:41.419 START TEST cpu_locks 00:05:41.419 ************************************ 00:05:41.419 08:18:33 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.679 * Looking for test storage... 00:05:41.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:41.679 08:18:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:41.679 08:18:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:41.680 08:18:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:41.680 08:18:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:41.680 08:18:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.680 08:18:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.680 08:18:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.680 ************************************ 00:05:41.680 START TEST default_locks 00:05:41.680 ************************************ 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60658 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60658 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60658 ']' 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
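The locks_exist check the trace performs next is a query of the kernel's advisory-lock table: a target started with -m 0x1 is expected to hold a lock whose name contains spdk_cpu_lock for the core it claimed, and lslocks reports it by pid. A minimal sketch of that probe against the instance started above:

  pid=60658
  # the lock appears in lslocks output only while the target owns core 0
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $pid"
  else
      echo "no spdk_cpu_lock entry for pid $pid" >&2
      exit 1
  fi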
00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.680 08:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.680 [2024-07-15 08:18:33.732424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:41.680 [2024-07-15 08:18:33.732531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60658 ] 00:05:41.938 [2024-07-15 08:18:33.874757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.938 [2024-07-15 08:18:33.998066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.938 [2024-07-15 08:18:34.054313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.876 08:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.876 08:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:42.876 08:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60658 00:05:42.876 08:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60658 00:05:42.876 08:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60658 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60658 ']' 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60658 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60658 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.134 killing process with pid 60658 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60658' 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60658 00:05:43.134 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60658 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60658 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60658 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.393 08:18:35 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60658 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60658 ']' 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.393 ERROR: process (pid: 60658) is no longer running 00:05:43.393 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60658) - No such process 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.393 00:05:43.393 real 0m1.886s 00:05:43.393 user 0m2.012s 00:05:43.393 sys 0m0.573s 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.393 08:18:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.393 ************************************ 00:05:43.393 END TEST default_locks 00:05:43.393 ************************************ 00:05:43.651 08:18:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.651 08:18:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.651 08:18:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.651 08:18:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.651 08:18:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.651 ************************************ 00:05:43.651 START TEST default_locks_via_rpc 00:05:43.651 ************************************ 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60710 00:05:43.651 08:18:35 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60710 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60710 ']' 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.651 08:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.651 [2024-07-15 08:18:35.653610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:43.651 [2024-07-15 08:18:35.653709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:05:43.651 [2024-07-15 08:18:35.791640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.910 [2024-07-15 08:18:35.921899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.910 [2024-07-15 08:18:35.975524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60710 00:05:44.845 08:18:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.845 08:18:36 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60710 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60710 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60710 ']' 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60710 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60710 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.102 killing process with pid 60710 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60710' 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60710 00:05:45.102 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60710 00:05:45.667 00:05:45.667 real 0m2.013s 00:05:45.667 user 0m2.257s 00:05:45.667 sys 0m0.573s 00:05:45.667 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.667 08:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.667 ************************************ 00:05:45.667 END TEST default_locks_via_rpc 00:05:45.667 ************************************ 00:05:45.667 08:18:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:45.667 08:18:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.667 08:18:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.667 08:18:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.667 08:18:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.667 ************************************ 00:05:45.667 START TEST non_locking_app_on_locked_coremask 00:05:45.667 ************************************ 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60761 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60761 /var/tmp/spdk.sock 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60761 ']' 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.667 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.667 08:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.667 [2024-07-15 08:18:37.732291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:45.667 [2024-07-15 08:18:37.732408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60761 ] 00:05:45.925 [2024-07-15 08:18:37.873454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.925 [2024-07-15 08:18:37.995109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.925 [2024-07-15 08:18:38.049046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60777 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60777 /var/tmp/spdk2.sock 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60777 ']' 00:05:46.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.556 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.557 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.557 08:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.814 [2024-07-15 08:18:38.708432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:46.814 [2024-07-15 08:18:38.708521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60777 ] 00:05:46.814 [2024-07-15 08:18:38.851958] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
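The "CPU core locks deactivated" notice above belongs to the second target: this test deliberately starts two spdk_tgt processes on the same core mask, which only works because the second one skips taking the core lock and listens on its own RPC socket. A reduced sketch of that launch pattern, with the binary path, mask, and socket taken from the trace; backgrounding and pid handling are simplified relative to the harness:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                                                 # first instance, takes the core-0 lock
  locked_pid=$!
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second instance, no lock, separate socket
  unlocked_pid=$!
  # only the first pid is expected to show a spdk_cpu_lock entry
  lslocks -p "$locked_pid" | grep -q spdk_cpu_lock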
00:05:46.814 [2024-07-15 08:18:38.852024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.071 [2024-07-15 08:18:39.093813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.072 [2024-07-15 08:18:39.204484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.637 08:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.637 08:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.637 08:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60761 00:05:47.637 08:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60761 00:05:47.637 08:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60761 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60761 ']' 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60761 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60761 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.568 killing process with pid 60761 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60761' 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60761 00:05:48.568 08:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60761 00:05:49.130 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60777 00:05:49.130 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60777 ']' 00:05:49.130 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60777 00:05:49.130 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.130 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.130 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60777 00:05:49.387 killing process with pid 60777 00:05:49.387 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.387 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.387 08:18:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60777' 00:05:49.387 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60777 00:05:49.387 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60777 00:05:49.645 00:05:49.645 real 0m4.041s 00:05:49.645 user 0m4.522s 00:05:49.645 sys 0m1.089s 00:05:49.645 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.645 08:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.645 ************************************ 00:05:49.645 END TEST non_locking_app_on_locked_coremask 00:05:49.645 ************************************ 00:05:49.645 08:18:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.645 08:18:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.645 08:18:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.645 08:18:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.645 08:18:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.645 ************************************ 00:05:49.645 START TEST locking_app_on_unlocked_coremask 00:05:49.645 ************************************ 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:49.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60844 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60844 /var/tmp/spdk.sock 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60844 ']' 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.645 08:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.902 [2024-07-15 08:18:41.826884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:49.902 [2024-07-15 08:18:41.827012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60844 ] 00:05:49.902 [2024-07-15 08:18:41.965437] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.902 [2024-07-15 08:18:41.965526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.160 [2024-07-15 08:18:42.089295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.160 [2024-07-15 08:18:42.141398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60860 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60860 /var/tmp/spdk2.sock 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60860 ']' 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.727 08:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.727 [2024-07-15 08:18:42.886325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:50.727 [2024-07-15 08:18:42.886441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:05:50.985 [2024-07-15 08:18:43.032176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.241 [2024-07-15 08:18:43.271976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.241 [2024-07-15 08:18:43.376856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.807 08:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.807 08:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.807 08:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60860 00:05:51.807 08:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60860 00:05:51.807 08:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60844 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60844 ']' 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60844 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60844 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.739 killing process with pid 60844 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60844' 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60844 00:05:52.739 08:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60844 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60860 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60860 ']' 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60860 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60860 00:05:53.305 killing process with pid 60860 00:05:53.305 08:18:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60860' 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60860 00:05:53.305 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60860 00:05:53.871 ************************************ 00:05:53.871 END TEST locking_app_on_unlocked_coremask 00:05:53.871 ************************************ 00:05:53.871 00:05:53.871 real 0m4.057s 00:05:53.871 user 0m4.597s 00:05:53.871 sys 0m1.066s 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.871 08:18:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:53.871 08:18:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:53.871 08:18:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.871 08:18:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.871 08:18:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.871 ************************************ 00:05:53.871 START TEST locking_app_on_locked_coremask 00:05:53.871 ************************************ 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60927 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60927 /var/tmp/spdk.sock 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60927 ']' 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.871 08:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.871 [2024-07-15 08:18:45.925039] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:53.871 [2024-07-15 08:18:45.925635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60927 ] 00:05:54.127 [2024-07-15 08:18:46.063630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.127 [2024-07-15 08:18:46.206881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.127 [2024-07-15 08:18:46.266565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60943 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60943 /var/tmp/spdk2.sock 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60943 /var/tmp/spdk2.sock 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60943 /var/tmp/spdk2.sock 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60943 ']' 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.059 08:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.059 [2024-07-15 08:18:47.016591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
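[editor's note] The NOT waitforlisten wrapper above expects this second spdk_tgt to die, because core 0 is already claimed by pid 60927. Outside the harness, the same behaviour can be reproduced by hand roughly as follows; the build path and the sleep-based wait are assumptions standing in for the harness's waitforlisten.

# Sketch: demonstrate the core-lock conflict without the test harness.
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
first=$!
sleep 2    # crude stand-in for waitforlisten

# Second instance on the same core mask but a different RPC socket; with core
# locks enabled it should log "Cannot create lock on core 0, probably process
# ... has claimed it." and exit non-zero, as pid 60943 does just below.
if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second target started despite the core lock" >&2
else
    echo "second target refused core 0, as expected"
fi

kill "$first"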
00:05:55.059 [2024-07-15 08:18:47.016751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60943 ] 00:05:55.059 [2024-07-15 08:18:47.167186] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60927 has claimed it. 00:05:55.059 [2024-07-15 08:18:47.167275] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.634 ERROR: process (pid: 60943) is no longer running 00:05:55.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60943) - No such process 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60927 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60927 00:05:55.634 08:18:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60927 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60927 ']' 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60927 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60927 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.200 killing process with pid 60927 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60927' 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60927 00:05:56.200 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60927 00:05:56.458 00:05:56.458 real 0m2.695s 00:05:56.458 user 0m3.175s 00:05:56.458 sys 0m0.648s 00:05:56.458 ************************************ 00:05:56.458 END TEST locking_app_on_locked_coremask 00:05:56.458 ************************************ 00:05:56.458 08:18:48 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.458 08:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.458 08:18:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.458 08:18:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.458 08:18:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.458 08:18:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.458 08:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.458 ************************************ 00:05:56.458 START TEST locking_overlapped_coremask 00:05:56.458 ************************************ 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60989 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60989 /var/tmp/spdk.sock 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60989 ']' 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.458 08:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.764 [2024-07-15 08:18:48.677704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:56.764 [2024-07-15 08:18:48.677850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60989 ] 00:05:56.764 [2024-07-15 08:18:48.828917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.023 [2024-07-15 08:18:48.959465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.023 [2024-07-15 08:18:48.959576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.023 [2024-07-15 08:18:48.959590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.023 [2024-07-15 08:18:49.015965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61007 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61007 /var/tmp/spdk2.sock 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61007 /var/tmp/spdk2.sock 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61007 /var/tmp/spdk2.sock 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61007 ']' 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.589 08:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.589 [2024-07-15 08:18:49.745877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
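[editor's note] The two masks explain the failure that follows: -m 0x7 pins the first target to cores 0-2 and -m 0x1c pins the second to cores 2-4, so they overlap only on core 2, which is exactly the core named in the claim error below. A quick way to eyeball that overlap (pure bash arithmetic, nothing SPDK-specific):

# Sketch: list the cores selected by a hex core mask and show where two masks overlap.
cores_of() {
    local mask=$(( $1 )) bit
    for bit in {0..31}; do
        (( mask & (1 << bit) )) && printf '%d ' "$bit"
    done
    echo
}

echo "0x7  -> $(cores_of 0x7)"      # 0 1 2
echo "0x1c -> $(cores_of 0x1c)"     # 2 3 4
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2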
00:05:57.589 [2024-07-15 08:18:49.745990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:05:57.847 [2024-07-15 08:18:49.891430] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60989 has claimed it. 00:05:57.847 [2024-07-15 08:18:49.891510] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.413 ERROR: process (pid: 61007) is no longer running 00:05:58.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61007) - No such process 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60989 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60989 ']' 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60989 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60989 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.414 killing process with pid 60989 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60989' 00:05:58.414 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60989 00:05:58.414 08:18:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60989 00:05:58.672 00:05:58.672 real 0m2.218s 00:05:58.672 user 0m6.069s 00:05:58.672 sys 0m0.432s 00:05:58.672 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.672 ************************************ 00:05:58.672 END TEST locking_overlapped_coremask 00:05:58.672 ************************************ 00:05:58.672 08:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.963 08:18:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:58.963 08:18:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.963 08:18:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.964 08:18:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.964 08:18:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.964 ************************************ 00:05:58.964 START TEST locking_overlapped_coremask_via_rpc 00:05:58.964 ************************************ 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61047 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61047 /var/tmp/spdk.sock 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61047 ']' 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.964 08:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.964 [2024-07-15 08:18:50.925639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:58.964 [2024-07-15 08:18:50.925737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61047 ] 00:05:58.964 [2024-07-15 08:18:51.061476] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
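[editor's note] The check_remaining_locks step in the overlapped_coremask test above compares the lock files actually present under /var/tmp against the set expected for a 0x7 mask (spdk_cpu_lock_000 through _002). A stand-alone reconstruction of that comparison, based on the trace rather than the literal cpu_locks.sh source:

# Sketch: verify that exactly the expected per-core lock files exist for a 0x7 mask.
check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
    if [[ "${locks[*]}" == "${expected[*]}" ]]; then
        echo "lock files match the 0x7 core mask"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
        return 1
    fi
}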
00:05:58.964 [2024-07-15 08:18:51.061534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.226 [2024-07-15 08:18:51.178691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.226 [2024-07-15 08:18:51.178863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.226 [2024-07-15 08:18:51.178867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.226 [2024-07-15 08:18:51.231266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61065 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61065 /var/tmp/spdk2.sock 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61065 ']' 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.791 08:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.048 [2024-07-15 08:18:51.983813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:00.048 [2024-07-15 08:18:51.983917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61065 ] 00:06:00.048 [2024-07-15 08:18:52.131014] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.048 [2024-07-15 08:18:52.134769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.305 [2024-07-15 08:18:52.371289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.305 [2024-07-15 08:18:52.374849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.305 [2024-07-15 08:18:52.374849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.562 [2024-07-15 08:18:52.480511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.820 [2024-07-15 08:18:52.961885] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61047 has claimed it. 
00:06:00.820 request: 00:06:00.820 { 00:06:00.820 "method": "framework_enable_cpumask_locks", 00:06:00.820 "req_id": 1 00:06:00.820 } 00:06:00.820 Got JSON-RPC error response 00:06:00.820 response: 00:06:00.820 { 00:06:00.820 "code": -32603, 00:06:00.820 "message": "Failed to claim CPU core: 2" 00:06:00.820 } 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61047 /var/tmp/spdk.sock 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61047 ']' 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.820 08:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61065 /var/tmp/spdk2.sock 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61065 ']' 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
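[editor's note] The rpc_cmd calls above drive this through the harness's persistent rpc.py session: the first target (started with --disable-cpumask-locks) enables its locks at runtime, and the second call is expected to fail with the JSON-RPC exchange quoted in the log (method framework_enable_cpumask_locks, error -32603 "Failed to claim CPU core: 2"). Issued by hand it would look roughly like this, assuming your SPDK checkout's scripts/rpc.py exposes the framework_enable_cpumask_locks subcommand; the SPDK path is the one from the trace.

# Sketch: enable CPU core locks at runtime on two targets started with
# --disable-cpumask-locks; the second call should fail because core 2 is taken.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: core 2 already locked by the first target"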
00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.078 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.642 00:06:01.642 real 0m2.656s 00:06:01.642 user 0m1.386s 00:06:01.642 sys 0m0.197s 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.642 08:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.642 ************************************ 00:06:01.642 END TEST locking_overlapped_coremask_via_rpc 00:06:01.642 ************************************ 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.642 08:18:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.642 08:18:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61047 ]] 00:06:01.642 08:18:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61047 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61047 ']' 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61047 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61047 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61047' 00:06:01.642 killing process with pid 61047 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61047 00:06:01.642 08:18:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61047 00:06:01.900 08:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61065 ]] 00:06:01.900 08:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61065 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61065 ']' 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61065 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.900 08:18:53 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61065 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61065' 00:06:01.900 killing process with pid 61065 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61065 00:06:01.900 08:18:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61065 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61047 ]] 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61047 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61047 ']' 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61047 00:06:02.465 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61047) - No such process 00:06:02.465 Process with pid 61047 is not found 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61047 is not found' 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61065 ]] 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61065 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61065 ']' 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61065 00:06:02.465 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61065) - No such process 00:06:02.465 Process with pid 61065 is not found 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61065 is not found' 00:06:02.465 08:18:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.465 00:06:02.465 real 0m20.828s 00:06:02.465 user 0m36.369s 00:06:02.465 sys 0m5.397s 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.465 08:18:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.465 ************************************ 00:06:02.466 END TEST cpu_locks 00:06:02.466 ************************************ 00:06:02.466 08:18:54 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.466 00:06:02.466 real 0m49.353s 00:06:02.466 user 1m35.402s 00:06:02.466 sys 0m9.085s 00:06:02.466 08:18:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.466 08:18:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 ************************************ 00:06:02.466 END TEST event 00:06:02.466 ************************************ 00:06:02.466 08:18:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.466 08:18:54 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.466 08:18:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.466 08:18:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.466 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 ************************************ 00:06:02.466 START TEST thread 
00:06:02.466 ************************************ 00:06:02.466 08:18:54 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.466 * Looking for test storage... 00:06:02.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.466 08:18:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.466 08:18:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.466 08:18:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.466 08:18:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 ************************************ 00:06:02.466 START TEST thread_poller_perf 00:06:02.466 ************************************ 00:06:02.466 08:18:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.466 [2024-07-15 08:18:54.590088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:02.466 [2024-07-15 08:18:54.590184] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ] 00:06:02.723 [2024-07-15 08:18:54.725520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.723 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:02.723 [2024-07-15 08:18:54.841969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.197 ====================================== 00:06:04.197 busy:2209363454 (cyc) 00:06:04.197 total_run_count: 318000 00:06:04.197 tsc_hz: 2200000000 (cyc) 00:06:04.197 ====================================== 00:06:04.197 poller_cost: 6947 (cyc), 3157 (nsec) 00:06:04.197 00:06:04.197 real 0m1.362s 00:06:04.197 user 0m1.207s 00:06:04.197 sys 0m0.048s 00:06:04.197 08:18:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.197 08:18:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.197 ************************************ 00:06:04.197 END TEST thread_poller_perf 00:06:04.197 ************************************ 00:06:04.197 08:18:55 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.197 08:18:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.197 08:18:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.197 08:18:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.197 08:18:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.197 ************************************ 00:06:04.197 START TEST thread_poller_perf 00:06:04.197 ************************************ 00:06:04.197 08:18:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.197 [2024-07-15 08:18:56.000946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
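[editor's note] The poller_perf summary is straightforward arithmetic over the counters it prints: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure is that cost scaled by the TSC frequency. Re-deriving the 1-µs-period run above (the 0-µs run that follows works the same way); the truncation here is chosen to reproduce the printed figures and may differ slightly from the tool's own rounding.

# Sketch: re-derive poller_cost from the counters printed above.
awk 'BEGIN {
    busy   = 2209363454              # "busy:" cycles from the run above
    runs   = 318000                  # total_run_count
    tsc_hz = 2200000000              # tsc_hz (cyc)
    cyc  = int(busy / runs)          # cycles per poller invocation
    nsec = int(cyc * 1e9 / tsc_hz)   # the same cost in nanoseconds
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec   # 6947 (cyc), 3157 (nsec)
}'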
00:06:04.197 [2024-07-15 08:18:56.001059] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61223 ] 00:06:04.197 [2024-07-15 08:18:56.141024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.197 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:04.197 [2024-07-15 08:18:56.266023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.574 ====================================== 00:06:05.574 busy:2201900514 (cyc) 00:06:05.574 total_run_count: 4216000 00:06:05.574 tsc_hz: 2200000000 (cyc) 00:06:05.574 ====================================== 00:06:05.574 poller_cost: 522 (cyc), 237 (nsec) 00:06:05.574 00:06:05.574 real 0m1.371s 00:06:05.574 user 0m1.210s 00:06:05.574 sys 0m0.052s 00:06:05.574 08:18:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.574 ************************************ 00:06:05.574 END TEST thread_poller_perf 00:06:05.574 ************************************ 00:06:05.574 08:18:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.574 08:18:57 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:05.574 08:18:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.574 00:06:05.574 real 0m2.908s 00:06:05.574 user 0m2.485s 00:06:05.574 sys 0m0.207s 00:06:05.574 08:18:57 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.574 08:18:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.574 ************************************ 00:06:05.574 END TEST thread 00:06:05.574 ************************************ 00:06:05.574 08:18:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.574 08:18:57 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:05.574 08:18:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.574 08:18:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.574 08:18:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.574 ************************************ 00:06:05.574 START TEST accel 00:06:05.574 ************************************ 00:06:05.574 08:18:57 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:05.574 * Looking for test storage... 00:06:05.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:05.574 08:18:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:05.574 08:18:57 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:05.574 08:18:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.574 08:18:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61303 00:06:05.574 08:18:57 accel -- accel/accel.sh@63 -- # waitforlisten 61303 00:06:05.574 08:18:57 accel -- common/autotest_common.sh@829 -- # '[' -z 61303 ']' 00:06:05.574 08:18:57 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.574 08:18:57 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.574 08:18:57 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:05.574 08:18:57 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.574 08:18:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.574 08:18:57 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:05.574 08:18:57 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:05.574 08:18:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.574 08:18:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.574 08:18:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.574 08:18:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.574 08:18:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.574 08:18:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.574 08:18:57 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.574 [2024-07-15 08:18:57.599696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:05.574 [2024-07-15 08:18:57.599824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61303 ] 00:06:05.574 [2024-07-15 08:18:57.738177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.832 [2024-07-15 08:18:57.867591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.832 [2024-07-15 08:18:57.926249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.399 08:18:58 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.399 08:18:58 accel -- common/autotest_common.sh@862 -- # return 0 00:06:06.399 08:18:58 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:06.399 08:18:58 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:06.399 08:18:58 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:06.399 08:18:58 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:06.399 08:18:58 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.399 08:18:58 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:06.399 08:18:58 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.399 08:18:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.399 08:18:58 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:06.399 08:18:58 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.657 08:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.657 08:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.657 08:18:58 accel -- accel/accel.sh@75 -- # killprocess 61303 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@948 -- # '[' -z 61303 ']' 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@952 -- # kill -0 61303 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@953 -- # uname 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61303 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.657 killing process with pid 61303 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61303' 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@967 -- # kill 61303 00:06:06.657 08:18:58 accel -- common/autotest_common.sh@972 -- # wait 61303 00:06:06.915 08:18:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:06.915 08:18:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:06.915 08:18:59 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.915 08:18:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.915 08:18:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.915 08:18:59 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:06.915 08:18:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:06.915 08:18:59 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.915 08:18:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:07.173 08:18:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.173 08:18:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:07.173 08:18:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.173 08:18:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.173 08:18:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.173 ************************************ 00:06:07.173 START TEST accel_missing_filename 00:06:07.173 ************************************ 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.173 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.174 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:07.174 08:18:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:07.174 [2024-07-15 08:18:59.138712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:07.174 [2024-07-15 08:18:59.138833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61349 ] 00:06:07.174 [2024-07-15 08:18:59.279223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.433 [2024-07-15 08:18:59.408636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.433 [2024-07-15 08:18:59.466845] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.433 [2024-07-15 08:18:59.544351] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:07.700 A filename is required. 
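[editor's note] What follows in the log (es=234, the "es > 128" check, and the final (( !es == 0 ))) is the harness's NOT helper deciding that this accel_perf failure was the expected kind. A deliberately simplified stand-in for that pattern, enough to write your own must-fail assertions, is sketched here; the real autotest_common.sh helper also inspects the exit status to tell ordinary errors apart from crashes, which this version does not.

# Sketch: a minimal "must fail" wrapper in the spirit of autotest_common.sh's NOT.
NOT() {
    if "$@"; then
        echo "NOT: '$*' unexpectedly succeeded" >&2
        return 1
    fi
    return 0
}

# Usage, mirroring the accel_missing_filename case above (path is an assumption):
# NOT ./build/examples/accel_perf -t 1 -w compress    # fails: no -l input file given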
00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.700 00:06:07.700 real 0m0.524s 00:06:07.700 user 0m0.339s 00:06:07.700 sys 0m0.126s 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.700 08:18:59 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:07.700 ************************************ 00:06:07.700 END TEST accel_missing_filename 00:06:07.700 ************************************ 00:06:07.700 08:18:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.700 08:18:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.700 08:18:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:07.700 08:18:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.700 08:18:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.700 ************************************ 00:06:07.700 START TEST accel_compress_verify 00:06:07.700 ************************************ 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.700 08:18:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.700 08:18:59 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:07.700 08:18:59 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.700 [2024-07-15 08:18:59.707202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:07.700 [2024-07-15 08:18:59.708026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61379 ] 00:06:07.700 [2024-07-15 08:18:59.848578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.999 [2024-07-15 08:18:59.968504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.999 [2024-07-15 08:19:00.025637] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.999 [2024-07-15 08:19:00.100958] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:08.257 00:06:08.257 Compression does not support the verify option, aborting. 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.257 00:06:08.257 real 0m0.511s 00:06:08.257 user 0m0.331s 00:06:08.257 sys 0m0.117s 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.257 ************************************ 00:06:08.257 08:19:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 END TEST accel_compress_verify 00:06:08.257 ************************************ 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.258 08:19:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.258 ************************************ 00:06:08.258 START TEST accel_wrong_workload 00:06:08.258 ************************************ 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
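The accel_compress_verify case just completed: it does pass a real input file but adds -y, and compression has no verify support, so accel_perf aborts and NOT again treats the failure as a pass. The accel_wrong_workload case starting above does the same thing with an invalid -w value. Sketches of the two failing commands, with the same caveats as before:

  # fails: compress rejects the -y verify flag
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y
  # fails: foobar is not a supported workload type
  ./build/examples/accel_perf -t 1 -w foobar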
00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:08.258 08:19:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:08.258 Unsupported workload type: foobar 00:06:08.258 [2024-07-15 08:19:00.264100] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:08.258 accel_perf options: 00:06:08.258 [-h help message] 00:06:08.258 [-q queue depth per core] 00:06:08.258 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.258 [-T number of threads per core 00:06:08.258 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.258 [-t time in seconds] 00:06:08.258 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.258 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.258 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.258 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.258 [-S for crc32c workload, use this seed value (default 0) 00:06:08.258 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.258 [-f for fill workload, use this BYTE value (default 255) 00:06:08.258 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.258 [-y verify result if this switch is on] 00:06:08.258 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.258 Can be used to spread operations across a wider range of memory. 
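That option summary is accel_perf's response to the unknown workload: spdk_app_parse_args rejects the 'w' argument before the app starts and the full flag list is printed. Valid names are the ones on the -w line above (copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor and the dif variants); for example, a plain one-second copy run with result verification would look like:

  ./build/examples/accel_perf -t 1 -w copy -y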
00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.258 00:06:08.258 real 0m0.028s 00:06:08.258 user 0m0.014s 00:06:08.258 sys 0m0.011s 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.258 08:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:08.258 ************************************ 00:06:08.258 END TEST accel_wrong_workload 00:06:08.258 ************************************ 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.258 08:19:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.258 ************************************ 00:06:08.258 START TEST accel_negative_buffers 00:06:08.258 ************************************ 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:08.258 08:19:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:08.258 -x option must be non-negative. 
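accel_negative_buffers, which just ran above, asks for -x -1 source buffers on an xor workload; argument parsing rejects the negative value, hence the "-x option must be non-negative." line, with the option summary repeated just below. Going by the -x description in that summary, a valid xor run uses at least two source buffers. A sketch:

  # fails at parse time: negative source-buffer count
  ./build/examples/accel_perf -t 1 -w xor -y -x -1
  # plausible valid form per the usage text (minimum of 2 source buffers)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3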
00:06:08.258 [2024-07-15 08:19:00.343575] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:08.258 accel_perf options: 00:06:08.258 [-h help message] 00:06:08.258 [-q queue depth per core] 00:06:08.258 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.258 [-T number of threads per core 00:06:08.258 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.258 [-t time in seconds] 00:06:08.258 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.258 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.258 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.258 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.258 [-S for crc32c workload, use this seed value (default 0) 00:06:08.258 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.258 [-f for fill workload, use this BYTE value (default 255) 00:06:08.258 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.258 [-y verify result if this switch is on] 00:06:08.258 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.258 Can be used to spread operations across a wider range of memory. 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.258 00:06:08.258 real 0m0.032s 00:06:08.258 user 0m0.017s 00:06:08.258 sys 0m0.014s 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.258 08:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:08.258 ************************************ 00:06:08.258 END TEST accel_negative_buffers 00:06:08.258 ************************************ 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.258 08:19:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.258 08:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.258 ************************************ 00:06:08.258 START TEST accel_crc32c 00:06:08.258 ************************************ 00:06:08.258 08:19:00 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.258 08:19:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.259 08:19:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:08.259 08:19:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:08.259 [2024-07-15 08:19:00.422810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:08.259 [2024-07-15 08:19:00.422956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61432 ] 00:06:08.517 [2024-07-15 08:19:00.568965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.775 [2024-07-15 08:19:00.692295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.775 08:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:10.150 08:19:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.150 00:06:10.150 real 0m1.525s 00:06:10.150 user 0m0.014s 00:06:10.150 sys 0m0.003s 00:06:10.150 08:19:01 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.150 08:19:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 ************************************ 00:06:10.150 END TEST accel_crc32c 00:06:10.150 ************************************ 00:06:10.150 08:19:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.150 08:19:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:10.150 08:19:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.150 08:19:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.150 08:19:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 ************************************ 00:06:10.150 START TEST accel_crc32c_C2 00:06:10.150 ************************************ 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:10.150 08:19:01 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.150 08:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:10.150 [2024-07-15 08:19:01.989849] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:10.150 [2024-07-15 08:19:01.989985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61472 ] 00:06:10.150 [2024-07-15 08:19:02.131259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.150 [2024-07-15 08:19:02.248008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.150 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.151 08:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.522 08:19:03 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.522 00:06:11.522 real 0m1.504s 00:06:11.522 user 0m0.014s 00:06:11.522 sys 0m0.002s 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.522 ************************************ 00:06:11.522 END TEST accel_crc32c_C2 00:06:11.522 08:19:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:11.522 ************************************ 00:06:11.522 08:19:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.522 08:19:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:11.522 08:19:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.523 08:19:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.523 08:19:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.523 ************************************ 00:06:11.523 START TEST accel_copy 00:06:11.523 ************************************ 00:06:11.523 08:19:03 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.523 08:19:03 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:11.523 08:19:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:11.523 [2024-07-15 08:19:03.544973] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:11.523 [2024-07-15 08:19:03.545106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61501 ] 00:06:11.780 [2024-07-15 08:19:03.693599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.780 [2024-07-15 08:19:03.810755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.780 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 
08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.781 08:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.157 ************************************ 00:06:13.157 END TEST accel_copy 00:06:13.157 ************************************ 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:13.157 08:19:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.157 00:06:13.157 real 0m1.520s 00:06:13.157 user 0m1.304s 00:06:13.157 sys 0m0.124s 00:06:13.157 08:19:05 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.157 08:19:05 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:13.157 08:19:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.157 08:19:05 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:13.157 08:19:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:13.157 08:19:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.157 08:19:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.157 ************************************ 00:06:13.157 START TEST accel_fill 00:06:13.157 ************************************ 00:06:13.157 08:19:05 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.157 08:19:05 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:13.157 08:19:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:13.157 [2024-07-15 08:19:05.099874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:13.157 [2024-07-15 08:19:05.099964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61541 ] 00:06:13.157 [2024-07-15 08:19:05.227506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.416 [2024-07-15 08:19:05.344605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.416 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.417 08:19:05 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 08:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
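From here on the tests are positive runs on the software module: crc32c with seed 32 (-S 32), crc32c again with a two-element io vector (-C 2), copy, and the fill run whose option trace is still streaming here (-f 128 -q 64 -a 64). Each runs for one second with result verification (-y) and the script then checks that the module reported back is "software". Their direct accel_perf equivalents, minus the harness-generated config:

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  ./build/examples/accel_perf -t 1 -w crc32c -y -C 2
  ./build/examples/accel_perf -t 1 -w copy -y
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y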
00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:14.793 08:19:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.793 00:06:14.793 real 0m1.487s 00:06:14.793 user 0m1.270s 00:06:14.793 sys 0m0.118s 00:06:14.793 08:19:06 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.793 08:19:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:14.793 ************************************ 00:06:14.793 END TEST accel_fill 00:06:14.793 ************************************ 00:06:14.793 08:19:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.793 08:19:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:14.793 08:19:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.793 08:19:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.793 08:19:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.793 ************************************ 00:06:14.793 START TEST accel_copy_crc32c 00:06:14.793 ************************************ 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:14.793 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:14.793 [2024-07-15 08:19:06.641989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:14.793 [2024-07-15 08:19:06.642084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61576 ] 00:06:14.793 [2024-07-15 08:19:06.779786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.793 [2024-07-15 08:19:06.909866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.052 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.053 08:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
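The copy_crc32c run whose configuration is dumped above is just the accel_perf example binary with the flags accel.sh shows at the start of this test (-c /dev/fd/62 -t 1 -w copy_crc32c -y). A minimal way to repeat it by hand against a local SPDK build, as a sketch that skips the harness-generated JSON config and assumes the default software module:

  # run the copy_crc32c workload for 1 second with verification (-y),
  # mirroring the flags from the trace; the JSON config the harness feeds
  # on /dev/fd/62 is omitted here, so accel_perf falls back to its defaults
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y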
00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.988 00:06:15.988 real 0m1.527s 00:06:15.988 user 0m1.317s 00:06:15.988 sys 0m0.114s 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.988 08:19:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:15.988 ************************************ 00:06:15.988 END TEST accel_copy_crc32c 00:06:15.988 ************************************ 00:06:16.247 08:19:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.247 08:19:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:16.247 08:19:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:16.247 08:19:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.247 08:19:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.247 ************************************ 00:06:16.247 START TEST accel_copy_crc32c_C2 00:06:16.247 ************************************ 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.247 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:16.247 [2024-07-15 08:19:08.221251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:16.247 [2024-07-15 08:19:08.221357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61610 ] 00:06:16.247 [2024-07-15 08:19:08.362374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.506 [2024-07-15 08:19:08.491961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.506 08:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.883 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.884 ************************************ 00:06:17.884 END TEST accel_copy_crc32c_C2 00:06:17.884 ************************************ 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.884 00:06:17.884 real 0m1.526s 00:06:17.884 
user 0m1.315s 00:06:17.884 sys 0m0.116s 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.884 08:19:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 08:19:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.884 08:19:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:17.884 08:19:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.884 08:19:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.884 08:19:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 ************************************ 00:06:17.884 START TEST accel_dualcast 00:06:17.884 ************************************ 00:06:17.884 08:19:09 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:17.884 08:19:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:17.884 [2024-07-15 08:19:09.792602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
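Before the dualcast job above gets going, note that the accel_copy_crc32c_C2 case that just finished is the same copy_crc32c invocation with one extra argument, -C 2, exactly as passed through run_test/accel_test in the trace. By hand, with the same caveats as the earlier sketch:

  # identical to the plain copy_crc32c sketch, plus the -C 2 argument the
  # _C2 test adds; the '8192 bytes' value in its dump (vs 4096 elsewhere)
  # appears to follow from that flag
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2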
00:06:17.884 [2024-07-15 08:19:09.792698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61645 ] 00:06:17.884 [2024-07-15 08:19:09.931553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.143 [2024-07-15 08:19:10.060016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 08:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.517 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.518 ************************************ 00:06:19.518 END TEST accel_dualcast 00:06:19.518 ************************************ 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.518 
08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:19.518 08:19:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.518 00:06:19.518 real 0m1.517s 00:06:19.518 user 0m1.308s 00:06:19.518 sys 0m0.115s 00:06:19.518 08:19:11 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.518 08:19:11 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:19.518 08:19:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.518 08:19:11 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:19.518 08:19:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.518 08:19:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.518 08:19:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.518 ************************************ 00:06:19.518 START TEST accel_compare 00:06:19.518 ************************************ 00:06:19.518 08:19:11 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:19.518 08:19:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:19.518 [2024-07-15 08:19:11.362526] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:19.518 [2024-07-15 08:19:11.362672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:06:19.518 [2024-07-15 08:19:11.505144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.518 [2024-07-15 08:19:11.659886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.776 08:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 ************************************ 00:06:20.811 END TEST accel_compare 00:06:20.811 ************************************ 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:20.811 08:19:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.811 00:06:20.811 real 0m1.587s 00:06:20.811 user 0m1.368s 00:06:20.811 sys 0m0.122s 00:06:20.811 08:19:12 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.811 08:19:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:20.811 08:19:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.811 08:19:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:20.811 08:19:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.811 08:19:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.811 08:19:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.093 ************************************ 00:06:21.093 START TEST accel_xor 00:06:21.093 ************************************ 00:06:21.093 08:19:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:21.093 08:19:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:21.093 [2024-07-15 08:19:12.997171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
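Every test in this block hands accel_perf its configuration as -c /dev/fd/62: the JSON produced by build_accel_config is passed on a file descriptor rather than written to a file, presumably via bash process substitution. A hedged sketch of that pattern with a placeholder config (the real harness emits a fuller accel subsystem config):

  # pass an inline JSON config to accel_perf over a file descriptor, the
  # pattern the '-c /dev/fd/62' argument in the trace points at; the empty
  # subsystems list is only a placeholder for what build_accel_config emits
  ./build/examples/accel_perf -c <(echo '{"subsystems": []}') -t 1 -w xor -y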
00:06:21.093 [2024-07-15 08:19:12.997271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61716 ] 00:06:21.093 [2024-07-15 08:19:13.137107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.352 [2024-07-15 08:19:13.266860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.352 08:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.728 ************************************ 00:06:22.728 END TEST accel_xor 00:06:22.728 ************************************ 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.728 00:06:22.728 real 0m1.529s 00:06:22.728 user 0m0.015s 00:06:22.728 sys 0m0.003s 00:06:22.728 08:19:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.728 08:19:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:22.728 08:19:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.728 08:19:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:22.728 08:19:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:22.728 08:19:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.728 08:19:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.728 ************************************ 00:06:22.728 START TEST accel_xor 00:06:22.728 ************************************ 00:06:22.728 08:19:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:22.728 08:19:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:22.729 [2024-07-15 08:19:14.565970] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
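This second accel_xor test repeats the xor workload with -x 3, and the dumps bear that out: the previous run shows val=2 where this one shows val=3, which suggests -x sets the number of xor source buffers. The equivalent hand-run sketch, with the same caveat about dropping the harness config:

  # xor workload with three source buffers (-x 3), matching the arguments of
  # the second accel_xor test above; the earlier run used the default of two
  ./build/examples/accel_perf -t 1 -w xor -y -x 3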
00:06:22.729 [2024-07-15 08:19:14.566061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61750 ] 00:06:22.729 [2024-07-15 08:19:14.698376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.729 [2024-07-15 08:19:14.818002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.729 08:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.101 08:19:16 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:24.101 08:19:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.101 00:06:24.101 real 0m1.499s 00:06:24.101 user 0m1.292s 00:06:24.101 sys 0m0.112s 00:06:24.101 08:19:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.101 ************************************ 00:06:24.101 END TEST accel_xor 00:06:24.101 ************************************ 00:06:24.101 08:19:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:24.101 08:19:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.101 08:19:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:24.101 08:19:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:24.101 08:19:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.101 08:19:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.101 ************************************ 00:06:24.101 START TEST accel_dif_verify 00:06:24.101 ************************************ 00:06:24.101 08:19:16 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:24.101 08:19:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:24.101 [2024-07-15 08:19:16.122582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
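The banner blocks and the real/user/sys lines that bracket each test come from the run_test helper (invoked here at accel/accel.sh@111, implemented in common/autotest_common.sh). Its observable shape in this log is roughly the sketch below; this is inferred from the trace only and is not the actual helper, which also manages xtrace and argument checks:

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }
  # accel_test is the wrapper defined in accel.sh, as seen in the trace.
  run_test accel_dif_verify accel_test -t 1 -w dif_verify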
00:06:24.102 [2024-07-15 08:19:16.122698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:06:24.102 [2024-07-15 08:19:16.259466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.360 [2024-07-15 08:19:16.380886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.360 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.361 08:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.732 08:19:17 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.732 08:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:25.733 08:19:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.733 00:06:25.733 real 0m1.511s 00:06:25.733 user 0m1.304s 00:06:25.733 sys 0m0.113s 00:06:25.733 08:19:17 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.733 08:19:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 ************************************ 00:06:25.733 END TEST accel_dif_verify 00:06:25.733 ************************************ 00:06:25.733 08:19:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.733 08:19:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:25.733 08:19:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.733 08:19:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.733 08:19:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 ************************************ 00:06:25.733 START TEST accel_dif_generate 00:06:25.733 ************************************ 00:06:25.733 08:19:17 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.733 08:19:17 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:25.733 08:19:17 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:25.733 [2024-07-15 08:19:17.687752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:25.733 [2024-07-15 08:19:17.687857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:06:25.733 [2024-07-15 08:19:17.826214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.992 [2024-07-15 08:19:17.944896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.992 08:19:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.992 08:19:18 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:25.992 08:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 ************************************ 00:06:27.392 END TEST accel_dif_generate 00:06:27.392 ************************************ 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.392 08:19:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:27.392 
08:19:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.392 00:06:27.392 real 0m1.508s 00:06:27.392 user 0m1.294s 00:06:27.392 sys 0m0.122s 00:06:27.392 08:19:19 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.392 08:19:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 08:19:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.392 08:19:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:27.392 08:19:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:27.392 08:19:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.392 08:19:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.392 ************************************ 00:06:27.392 START TEST accel_dif_generate_copy 00:06:27.392 ************************************ 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:27.392 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:27.392 [2024-07-15 08:19:19.243511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
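The three DIF blocks in this stretch of the log (dif_verify, dif_generate, dif_generate_copy) follow an identical pattern and differ only in the -w selector handed to accel_perf, so they can be sketched as one loop; as above, this assumes the optional JSON config fd can be dropped:

  # Run each 1-second DIF workload recorded in this part of the log.
  for w in dif_verify dif_generate dif_generate_copy; do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
  done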
00:06:27.392 [2024-07-15 08:19:19.243625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61854 ] 00:06:27.392 [2024-07-15 08:19:19.383354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.392 [2024-07-15 08:19:19.511228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.651 08:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
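The pass criterion for each of these blocks is the trio of checks at accel/accel.sh@27 (they appear again just below for dif_generate_copy); with the xtrace framing stripped they reduce to:

  # accel_module and accel_opc are captured earlier by the parsing loop
  # (accel_module=software at accel/accel.sh@22, accel_opc at @23).
  [[ -n "$accel_module" ]]
  [[ -n "$accel_opc" ]]
  [[ "$accel_module" == software ]]   # this run exercises the software module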
00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.586 ************************************ 00:06:28.586 END TEST accel_dif_generate_copy 00:06:28.586 ************************************ 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.586 00:06:28.586 real 0m1.511s 00:06:28.586 user 0m1.304s 00:06:28.586 sys 0m0.113s 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.586 08:19:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:28.845 08:19:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.845 08:19:20 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:28.845 08:19:20 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.845 08:19:20 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:28.845 08:19:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.845 08:19:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.845 ************************************ 00:06:28.845 START TEST accel_comp 00:06:28.845 ************************************ 00:06:28.845 08:19:20 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:28.845 08:19:20 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:28.845 08:19:20 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:28.845 [2024-07-15 08:19:20.799824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:28.845 [2024-07-15 08:19:20.799911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61890 ] 00:06:28.845 [2024-07-15 08:19:20.930917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.104 [2024-07-15 08:19:21.045354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.104 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.105 08:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:30.514 08:19:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.514 00:06:30.514 real 0m1.487s 00:06:30.514 user 0m1.296s 00:06:30.514 sys 0m0.098s 00:06:30.514 08:19:22 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.514 ************************************ 00:06:30.514 END TEST accel_comp 00:06:30.514 ************************************ 00:06:30.514 08:19:22 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:30.514 08:19:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.514 08:19:22 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:30.514 08:19:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.514 08:19:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.514 08:19:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.514 ************************************ 00:06:30.514 START TEST accel_decomp 00:06:30.514 ************************************ 00:06:30.514 08:19:22 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:30.514 [2024-07-15 08:19:22.344314] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:30.514 [2024-07-15 08:19:22.344419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61926 ] 00:06:30.514 [2024-07-15 08:19:22.483896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.514 [2024-07-15 08:19:22.613648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.514 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.515 08:19:22 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.515 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
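The compress and decompress blocks additionally take an input file via -l, pointing at the bundled test vector under test/accel/bib. Sketched standalone, with the same assumption that the config fd can be omitted:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Flags mirror the accel_comp and accel_decomp invocations in the trace.
  "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y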
00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.774 08:19:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.707 ************************************ 00:06:31.707 END TEST accel_decomp 00:06:31.707 ************************************ 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.707 08:19:23 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.707 00:06:31.707 real 0m1.524s 00:06:31.707 user 0m1.314s 00:06:31.707 sys 0m0.117s 00:06:31.707 08:19:23 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.707 08:19:23 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:31.965 08:19:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.965 08:19:23 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.965 08:19:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:31.965 08:19:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.965 08:19:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.965 ************************************ 00:06:31.965 START TEST accel_decomp_full 00:06:31.965 ************************************ 00:06:31.965 08:19:23 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:31.965 08:19:23 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:31.965 [2024-07-15 08:19:23.924951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:31.966 [2024-07-15 08:19:23.925060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61960 ] 00:06:31.966 [2024-07-15 08:19:24.062839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.223 [2024-07-15 08:19:24.179253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 08:19:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.599 08:19:25 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.599 00:06:33.599 real 0m1.520s 00:06:33.599 user 0m1.318s 00:06:33.599 sys 0m0.108s 00:06:33.599 08:19:25 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.599 ************************************ 00:06:33.599 END TEST accel_decomp_full 00:06:33.599 ************************************ 00:06:33.600 08:19:25 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:33.600 08:19:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.600 08:19:25 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:33.600 08:19:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:33.600 08:19:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.600 08:19:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.600 ************************************ 00:06:33.600 START TEST accel_decomp_mcore 00:06:33.600 ************************************ 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:33.600 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:33.600 [2024-07-15 08:19:25.488126] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:33.600 [2024-07-15 08:19:25.488229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61995 ] 00:06:33.600 [2024-07-15 08:19:25.626716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.859 [2024-07-15 08:19:25.783091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.859 [2024-07-15 08:19:25.783245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.859 [2024-07-15 08:19:25.783570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.859 [2024-07-15 08:19:25.783366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.859 
08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.859 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 08:19:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.229 00:06:35.229 real 0m1.560s 00:06:35.229 user 0m4.722s 00:06:35.229 sys 0m0.135s 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.229 08:19:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:35.229 ************************************ 00:06:35.229 END TEST accel_decomp_mcore 00:06:35.229 ************************************ 00:06:35.229 08:19:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.229 08:19:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.229 08:19:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:35.229 08:19:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.229 08:19:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.229 ************************************ 00:06:35.229 START TEST accel_decomp_full_mcore 00:06:35.229 ************************************ 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:06:35.229 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:35.229 [2024-07-15 08:19:27.091076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:35.229 [2024-07-15 08:19:27.091173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ] 00:06:35.229 [2024-07-15 08:19:27.229132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.229 [2024-07-15 08:19:27.361034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.229 [2024-07-15 08:19:27.361174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.229 [2024-07-15 08:19:27.361273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.229 [2024-07-15 08:19:27.361279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.485 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.486 08:19:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.858 00:06:36.858 real 0m1.539s 00:06:36.858 user 0m4.753s 00:06:36.858 sys 0m0.132s 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.858 ************************************ 00:06:36.858 END TEST accel_decomp_full_mcore 00:06:36.858 ************************************ 00:06:36.858 08:19:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:36.858 08:19:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.858 08:19:28 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:36.858 08:19:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:36.858 08:19:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.858 08:19:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.858 ************************************ 00:06:36.858 START TEST accel_decomp_mthread 00:06:36.858 ************************************ 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:36.858 [2024-07-15 08:19:28.676537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:36.858 [2024-07-15 08:19:28.676631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62070 ] 00:06:36.858 [2024-07-15 08:19:28.812146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.858 [2024-07-15 08:19:28.930095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.858 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.859 08:19:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.232 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.233 ************************************ 00:06:38.233 END TEST accel_decomp_mthread 00:06:38.233 ************************************ 00:06:38.233 00:06:38.233 real 0m1.507s 00:06:38.233 user 0m0.012s 00:06:38.233 sys 0m0.004s 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.233 08:19:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:38.233 08:19:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.233 08:19:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.233 08:19:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:38.233 08:19:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.233 08:19:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.233 ************************************ 00:06:38.233 START 
TEST accel_decomp_full_mthread 00:06:38.233 ************************************ 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:38.233 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:38.233 [2024-07-15 08:19:30.234095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:38.233 [2024-07-15 08:19:30.234233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62110 ] 00:06:38.233 [2024-07-15 08:19:30.377482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.491 [2024-07-15 08:19:30.496485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.491 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.492 08:19:30 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.492 08:19:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.865 00:06:39.865 real 0m1.544s 00:06:39.865 user 0m1.335s 00:06:39.865 sys 0m0.116s 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.865 08:19:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:39.865 ************************************ 00:06:39.865 END TEST accel_decomp_full_mthread 00:06:39.865 ************************************ 
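For reference, the accel_perf invocation traced for this case can be repeated by hand. A minimal sketch follows; paths assume the vagrant checkout used in this run, and the -c /dev/fd/62 argument in the trace is just the file descriptor through which the script feeds the accel JSON config it builds with build_accel_config, so it is dropped here.

  # Re-run the traced full-buffer multithread decompress case outside the harness.
  # The flags are the ones the script passes above: -t 1 matches the '1 seconds'
  # value echoed in the trace and -T 2 the thread count of 2.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2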
00:06:39.865 08:19:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.865 08:19:31 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:39.865 08:19:31 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.865 08:19:31 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:39.865 08:19:31 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.865 08:19:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.865 08:19:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.865 08:19:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.865 08:19:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.865 08:19:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.865 08:19:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.865 08:19:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.865 08:19:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:39.865 08:19:31 accel -- accel/accel.sh@41 -- # jq -r . 00:06:39.865 ************************************ 00:06:39.865 START TEST accel_dif_functional_tests 00:06:39.865 ************************************ 00:06:39.865 08:19:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.865 [2024-07-15 08:19:31.852959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:39.865 [2024-07-15 08:19:31.853075] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62140 ] 00:06:39.865 [2024-07-15 08:19:31.989020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.123 [2024-07-15 08:19:32.109873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.123 [2024-07-15 08:19:32.110021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.123 [2024-07-15 08:19:32.110025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.123 [2024-07-15 08:19:32.165117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.123 00:06:40.123 00:06:40.123 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.123 http://cunit.sourceforge.net/ 00:06:40.123 00:06:40.123 00:06:40.123 Suite: accel_dif 00:06:40.123 Test: verify: DIF generated, GUARD check ...passed 00:06:40.123 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.123 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.123 Test: verify: DIF not generated, GUARD check ...passed 00:06:40.123 Test: verify: DIF not generated, APPTAG check ...passed 00:06:40.123 Test: verify: DIF not generated, REFTAG check ...passed 00:06:40.123 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.123 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 08:19:32.204226] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.123 [2024-07-15 08:19:32.204317] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.123 [2024-07-15 08:19:32.204358] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.123 [2024-07-15 08:19:32.204429] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.123 passed 00:06:40.123 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:40.124 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.124 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.124 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:40.124 Test: verify copy: DIF generated, GUARD check ...passed 00:06:40.124 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:40.124 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:40.124 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:40.124 Test: verify copy: DIF not generated, APPTAG check ...passed 00:06:40.124 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:40.124 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.124 Test: generate copy: DIF generated, APTTAG check ...[2024-07-15 08:19:32.204581] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.124 [2024-07-15 08:19:32.204780] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.124 [2024-07-15 08:19:32.204826] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.124 [2024-07-15 08:19:32.204865] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.124 passed 00:06:40.124 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.124 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.124 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.124 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.124 Test: generate copy: iovecs-len validate ...passed 00:06:40.124 Test: generate copy: buffer alignment validate ...passed 00:06:40.124 00:06:40.124 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.124 suites 1 1 n/a 0 0 00:06:40.124 tests 26 26 26 0 0 00:06:40.124 asserts 115 115 115 0 n/a 00:06:40.124 00:06:40.124 Elapsed time = 0.002 seconds[2024-07-15 08:19:32.205126] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
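The 26 assertions above come from a standalone CUnit app that accel.sh drives through run_test, so they can be rerun in isolation. A sketch under the same tree layout; the JSON handed to -c is an assumption about what build_accel_config reduces to when, as in this run, no hardware accel module is enabled.

  # Re-run only the DIF functional tests (GUARD/APPTAG/REFTAG verify, verify-copy,
  # generate-copy) without the rest of the accel suite.
  cd /home/vagrant/spdk_repo/spdk
  cfg='{"subsystems":[{"subsystem":"accel","config":[]}]}'   # assumed empty accel config
  ./test/accel/dif/dif -c <(printf '%s\n' "$cfg")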
00:06:40.124 00:06:40.381 00:06:40.381 real 0m0.623s 00:06:40.381 user 0m0.827s 00:06:40.381 sys 0m0.158s 00:06:40.381 08:19:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.381 08:19:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:40.381 ************************************ 00:06:40.381 END TEST accel_dif_functional_tests 00:06:40.381 ************************************ 00:06:40.381 08:19:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.381 00:06:40.381 real 0m35.018s 00:06:40.381 user 0m36.753s 00:06:40.381 sys 0m3.957s 00:06:40.381 08:19:32 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.381 08:19:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.381 ************************************ 00:06:40.381 END TEST accel 00:06:40.381 ************************************ 00:06:40.381 08:19:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.381 08:19:32 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:40.381 08:19:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.381 08:19:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.381 08:19:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.381 ************************************ 00:06:40.382 START TEST accel_rpc 00:06:40.382 ************************************ 00:06:40.382 08:19:32 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:40.640 * Looking for test storage... 00:06:40.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:40.640 08:19:32 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.640 08:19:32 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62210 00:06:40.640 08:19:32 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.640 08:19:32 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62210 00:06:40.640 08:19:32 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62210 ']' 00:06:40.640 08:19:32 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.640 08:19:32 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.640 08:19:32 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.640 08:19:32 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.640 08:19:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.640 [2024-07-15 08:19:32.635658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
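The accel_rpc startup that begins here follows the usual harness pattern: launch spdk_tgt with initialization gated behind --wait-for-rpc, then poll the default UNIX socket until it answers. A rough equivalent of the waitforlisten step, assuming rpc.py's -t timeout option:

  # Start the target with RPCs deferred, then wait for /var/tmp/spdk.sock to respond.
  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5   # waitforlisten does roughly this, with a retry cap
  done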
00:06:40.640 [2024-07-15 08:19:32.635781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:06:40.640 [2024-07-15 08:19:32.771285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.898 [2024-07-15 08:19:32.888674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.464 08:19:33 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.464 08:19:33 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.464 08:19:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.464 08:19:33 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.464 08:19:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.464 08:19:33 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.464 08:19:33 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.464 08:19:33 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.464 08:19:33 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.464 08:19:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.464 ************************************ 00:06:41.464 START TEST accel_assign_opcode 00:06:41.464 ************************************ 00:06:41.464 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:41.464 08:19:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:41.464 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.464 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.721 [2024-07-15 08:19:33.637316] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.721 [2024-07-15 08:19:33.645304] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.721 [2024-07-15 08:19:33.706113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.721 
08:19:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.721 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.979 software 00:06:41.979 ************************************ 00:06:41.979 END TEST accel_assign_opcode 00:06:41.979 ************************************ 00:06:41.979 00:06:41.979 real 0m0.303s 00:06:41.979 user 0m0.061s 00:06:41.979 sys 0m0.010s 00:06:41.979 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.979 08:19:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:41.979 08:19:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62210 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62210 ']' 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62210 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62210 00:06:41.979 killing process with pid 62210 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62210' 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@967 -- # kill 62210 00:06:41.979 08:19:33 accel_rpc -- common/autotest_common.sh@972 -- # wait 62210 00:06:42.238 00:06:42.238 real 0m1.887s 00:06:42.238 user 0m2.037s 00:06:42.238 sys 0m0.409s 00:06:42.238 08:19:34 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.238 ************************************ 00:06:42.238 END TEST accel_rpc 00:06:42.238 ************************************ 00:06:42.238 08:19:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.497 08:19:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.497 08:19:34 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.497 08:19:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.497 08:19:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.497 08:19:34 -- common/autotest_common.sh@10 -- # set +x 00:06:42.497 ************************************ 00:06:42.497 START TEST app_cmdline 00:06:42.497 ************************************ 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.497 * Looking for test storage... 
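Written out as plain RPC calls, the accel_assign_opcode sequence that just finished above is four commands against that target; the final jq pipeline is what produces the lone "software" line in the output.

  # Assign the copy opcode to a bogus module, override it with the software module,
  # finish framework init, then confirm the assignment (prints "software").
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy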
00:06:42.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.497 08:19:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.497 08:19:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62303 00:06:42.497 08:19:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62303 00:06:42.497 08:19:34 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62303 ']' 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.497 08:19:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.497 [2024-07-15 08:19:34.599496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:42.497 [2024-07-15 08:19:34.600191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:06:42.755 [2024-07-15 08:19:34.743947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.755 [2024-07-15 08:19:34.860482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.755 [2024-07-15 08:19:34.914174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.691 08:19:35 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.691 08:19:35 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:43.691 { 00:06:43.691 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:06:43.691 "fields": { 00:06:43.691 "major": 24, 00:06:43.691 "minor": 9, 00:06:43.691 "patch": 0, 00:06:43.691 "suffix": "-pre", 00:06:43.691 "commit": "719d03c6a" 00:06:43.691 } 00:06:43.691 } 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.691 08:19:35 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.691 08:19:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.691 08:19:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:43.691 08:19:35 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.949 08:19:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.949 08:19:35 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.949 08:19:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.949 08:19:35 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:43.949 08:19:35 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.949 08:19:35 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.949 08:19:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:43.950 08:19:35 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.208 request: 00:06:44.208 { 00:06:44.208 "method": "env_dpdk_get_mem_stats", 00:06:44.208 "req_id": 1 00:06:44.208 } 00:06:44.208 Got JSON-RPC error response 00:06:44.208 response: 00:06:44.208 { 00:06:44.208 "code": -32601, 00:06:44.208 "message": "Method not found" 00:06:44.208 } 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.208 08:19:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62303 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62303 ']' 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62303 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62303 00:06:44.208 killing process with pid 62303 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62303' 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@967 -- # kill 62303 00:06:44.208 08:19:36 app_cmdline -- common/autotest_common.sh@972 -- # wait 62303 00:06:44.467 00:06:44.467 real 0m2.158s 00:06:44.467 user 0m2.725s 00:06:44.467 sys 0m0.472s 00:06:44.467 08:19:36 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.467 ************************************ 00:06:44.467 END TEST app_cmdline 00:06:44.467 ************************************ 00:06:44.467 08:19:36 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.726 08:19:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:44.726 08:19:36 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:44.726 08:19:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.726 08:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.726 08:19:36 -- common/autotest_common.sh@10 -- # set +x 00:06:44.726 ************************************ 00:06:44.726 START TEST version 00:06:44.726 ************************************ 00:06:44.726 08:19:36 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:44.726 * Looking for test storage... 00:06:44.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:44.726 08:19:36 version -- app/version.sh@17 -- # get_header_version major 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # cut -f2 00:06:44.726 08:19:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.726 08:19:36 version -- app/version.sh@17 -- # major=24 00:06:44.726 08:19:36 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.726 08:19:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # cut -f2 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.726 08:19:36 version -- app/version.sh@18 -- # minor=9 00:06:44.726 08:19:36 version -- app/version.sh@19 -- # get_header_version patch 00:06:44.726 08:19:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # cut -f2 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.726 08:19:36 version -- app/version.sh@19 -- # patch=0 00:06:44.726 08:19:36 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # cut -f2 00:06:44.726 08:19:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:44.726 08:19:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.726 08:19:36 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.726 08:19:36 version -- app/version.sh@22 -- # version=24.9 00:06:44.726 08:19:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.726 08:19:36 version -- app/version.sh@28 -- # version=24.9rc0 00:06:44.726 08:19:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:44.726 08:19:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.726 08:19:36 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:44.726 08:19:36 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:44.726 00:06:44.726 real 0m0.153s 00:06:44.726 user 0m0.082s 00:06:44.726 sys 0m0.104s 00:06:44.726 ************************************ 00:06:44.726 END TEST version 00:06:44.726 ************************************ 00:06:44.726 08:19:36 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.726 08:19:36 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.726 08:19:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:44.726 08:19:36 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:44.726 08:19:36 -- spdk/autotest.sh@198 -- # uname -s 00:06:44.726 08:19:36 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:44.726 08:19:36 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:44.726 08:19:36 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:44.726 08:19:36 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:44.726 08:19:36 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:44.726 08:19:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.726 08:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.727 08:19:36 -- common/autotest_common.sh@10 -- # set +x 00:06:44.727 ************************************ 00:06:44.727 START TEST spdk_dd 00:06:44.727 ************************************ 00:06:44.727 08:19:36 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:44.986 * Looking for test storage... 00:06:44.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:44.986 08:19:36 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.986 08:19:36 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.986 08:19:36 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.986 08:19:36 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.986 08:19:36 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.986 08:19:36 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.986 08:19:36 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.986 08:19:36 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:44.986 08:19:36 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.986 08:19:36 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:45.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.245 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.245 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.245 08:19:37 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:45.245 08:19:37 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:45.245 08:19:37 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:45.245 08:19:37 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:45.246 08:19:37 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:45.246 08:19:37 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
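The nvme_in_userspace walk that just produced 0000:00:10.0 and 0000:00:11.0 boils down to the single lspci pipeline it traces, filtering on class 01 / subclass 08 / progif 02 (NVMe controllers):

  # List NVMe controller BDFs the same way scripts/common.sh does above.
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'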
00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:45.246 08:19:37 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:45.246 
08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:45.246 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:45.506 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:45.507 * spdk_dd linked to liburing 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:45.507 08:19:37 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:45.507 08:19:37 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:45.507 08:19:37 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:45.507 08:19:37 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:45.507 08:19:37 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.507 08:19:37 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.507 08:19:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.507 ************************************ 00:06:45.507 START TEST spdk_dd_basic_rw 00:06:45.507 ************************************ 00:06:45.507 08:19:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:45.507 * Looking for test storage... 
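The xtrace above is dd/common.sh deciding whether the spdk_dd binary is linked against liburing: it reads a list of shared-library names, matches each first field against liburing.so.*, prints "* spdk_dd linked to liburing" on a hit, then sources test/common/build_config.sh and confirms /usr/lib64/liburing.so.2 exists before exporting liburing_in_use=1. A minimal sketch of that check follows; the producer of the read loop's input is not shown in this excerpt, so feeding it from ldd is an assumption, and the variable names are only illustrative.

# Sketch only -- not the actual dd/common.sh. Assumes the library list comes
# from ldd of the spdk_dd binary (the log shows only the read loop itself).
liburing_in_use=0
while read -r lib _ so _; do            # e.g. "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
    if [[ $lib == liburing.so.* ]]; then
        printf '* spdk_dd linked to liburing\n'
        liburing_in_use=1
    fi
done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
# The logged script also sources test/common/build_config.sh and re-checks that
# /usr/lib64/liburing.so.2 exists before exporting the flag.
[[ -e /usr/lib64/liburing.so.2 ]] || liburing_in_use=0
export liburing_in_use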
00:06:45.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.507 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.507 08:19:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.507 08:19:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.507 08:19:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.507 08:19:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:45.508 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:45.769 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:45.769 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.770 ************************************ 00:06:45.770 START TEST dd_bs_lt_native_bs 00:06:45.770 ************************************ 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.770 08:19:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:45.770 { 00:06:45.770 "subsystems": [ 00:06:45.770 { 00:06:45.770 "subsystem": "bdev", 00:06:45.770 "config": [ 00:06:45.770 { 00:06:45.770 "params": { 00:06:45.770 "trtype": "pcie", 00:06:45.770 "traddr": "0000:00:10.0", 00:06:45.770 "name": "Nvme0" 00:06:45.770 }, 00:06:45.770 "method": "bdev_nvme_attach_controller" 00:06:45.770 }, 00:06:45.770 { 00:06:45.770 "method": "bdev_wait_for_examine" 00:06:45.770 } 00:06:45.770 ] 00:06:45.770 } 00:06:45.770 ] 00:06:45.770 } 00:06:45.770 [2024-07-15 08:19:37.791697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:45.770 [2024-07-15 08:19:37.792028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62623 ] 00:06:45.770 [2024-07-15 08:19:37.931814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.079 [2024-07-15 08:19:38.061386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.079 [2024-07-15 08:19:38.117472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.079 [2024-07-15 08:19:38.222915] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:46.079 [2024-07-15 08:19:38.222968] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.338 [2024-07-15 08:19:38.341429] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.338 00:06:46.338 real 0m0.707s 00:06:46.338 user 0m0.496s 00:06:46.338 sys 0m0.164s 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.338 ************************************ 00:06:46.338 END TEST dd_bs_lt_native_bs 00:06:46.338 ************************************ 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.338 ************************************ 00:06:46.338 START TEST dd_rw 00:06:46.338 ************************************ 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:46.338 08:19:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.272 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:47.272 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.272 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.272 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.273 [2024-07-15 08:19:39.214690] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:47.273 [2024-07-15 08:19:39.215033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62654 ] 00:06:47.273 { 00:06:47.273 "subsystems": [ 00:06:47.273 { 00:06:47.273 "subsystem": "bdev", 00:06:47.273 "config": [ 00:06:47.273 { 00:06:47.273 "params": { 00:06:47.273 "trtype": "pcie", 00:06:47.273 "traddr": "0000:00:10.0", 00:06:47.273 "name": "Nvme0" 00:06:47.273 }, 00:06:47.273 "method": "bdev_nvme_attach_controller" 00:06:47.273 }, 00:06:47.273 { 00:06:47.273 "method": "bdev_wait_for_examine" 00:06:47.273 } 00:06:47.273 ] 00:06:47.273 } 00:06:47.273 ] 00:06:47.273 } 00:06:47.273 [2024-07-15 08:19:39.356504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.532 [2024-07-15 08:19:39.498043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.532 [2024-07-15 08:19:39.557831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.790  Copying: 60/60 [kB] (average 29 MBps) 00:06:47.791 00:06:47.791 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:47.791 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:47.791 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.791 08:19:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.791 { 00:06:47.791 "subsystems": [ 00:06:47.791 { 00:06:47.791 "subsystem": "bdev", 00:06:47.791 "config": [ 
00:06:47.791 { 00:06:47.791 "params": { 00:06:47.791 "trtype": "pcie", 00:06:47.791 "traddr": "0000:00:10.0", 00:06:47.791 "name": "Nvme0" 00:06:47.791 }, 00:06:47.791 "method": "bdev_nvme_attach_controller" 00:06:47.791 }, 00:06:47.791 { 00:06:47.791 "method": "bdev_wait_for_examine" 00:06:47.791 } 00:06:47.791 ] 00:06:47.791 } 00:06:47.791 ] 00:06:47.791 } 00:06:47.791 [2024-07-15 08:19:39.943870] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:47.791 [2024-07-15 08:19:39.943969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:06:48.049 [2024-07-15 08:19:40.082462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.049 [2024-07-15 08:19:40.202534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.307 [2024-07-15 08:19:40.257501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.565  Copying: 60/60 [kB] (average 29 MBps) 00:06:48.565 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.565 08:19:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.565 [2024-07-15 08:19:40.643607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:48.565 [2024-07-15 08:19:40.643703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62694 ] 00:06:48.565 { 00:06:48.566 "subsystems": [ 00:06:48.566 { 00:06:48.566 "subsystem": "bdev", 00:06:48.566 "config": [ 00:06:48.566 { 00:06:48.566 "params": { 00:06:48.566 "trtype": "pcie", 00:06:48.566 "traddr": "0000:00:10.0", 00:06:48.566 "name": "Nvme0" 00:06:48.566 }, 00:06:48.566 "method": "bdev_nvme_attach_controller" 00:06:48.566 }, 00:06:48.566 { 00:06:48.566 "method": "bdev_wait_for_examine" 00:06:48.566 } 00:06:48.566 ] 00:06:48.566 } 00:06:48.566 ] 00:06:48.566 } 00:06:48.824 [2024-07-15 08:19:40.775384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.824 [2024-07-15 08:19:40.895358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.824 [2024-07-15 08:19:40.950211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.340  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.340 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:49.340 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.906 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:49.906 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.906 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.906 08:19:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.906 [2024-07-15 08:19:42.029929] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:49.906 [2024-07-15 08:19:42.030350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:06:49.906 { 00:06:49.906 "subsystems": [ 00:06:49.906 { 00:06:49.906 "subsystem": "bdev", 00:06:49.906 "config": [ 00:06:49.906 { 00:06:49.906 "params": { 00:06:49.906 "trtype": "pcie", 00:06:49.906 "traddr": "0000:00:10.0", 00:06:49.906 "name": "Nvme0" 00:06:49.906 }, 00:06:49.906 "method": "bdev_nvme_attach_controller" 00:06:49.906 }, 00:06:49.906 { 00:06:49.906 "method": "bdev_wait_for_examine" 00:06:49.906 } 00:06:49.906 ] 00:06:49.906 } 00:06:49.906 ] 00:06:49.906 } 00:06:50.165 [2024-07-15 08:19:42.169617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.165 [2024-07-15 08:19:42.287793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.424 [2024-07-15 08:19:42.342988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.724  Copying: 60/60 [kB] (average 58 MBps) 00:06:50.724 00:06:50.724 08:19:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:50.724 08:19:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.724 08:19:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.724 08:19:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.724 [2024-07-15 08:19:42.720022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:50.724 [2024-07-15 08:19:42.720115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62732 ] 00:06:50.724 { 00:06:50.724 "subsystems": [ 00:06:50.724 { 00:06:50.724 "subsystem": "bdev", 00:06:50.724 "config": [ 00:06:50.724 { 00:06:50.724 "params": { 00:06:50.724 "trtype": "pcie", 00:06:50.724 "traddr": "0000:00:10.0", 00:06:50.724 "name": "Nvme0" 00:06:50.724 }, 00:06:50.724 "method": "bdev_nvme_attach_controller" 00:06:50.724 }, 00:06:50.724 { 00:06:50.724 "method": "bdev_wait_for_examine" 00:06:50.724 } 00:06:50.724 ] 00:06:50.724 } 00:06:50.724 ] 00:06:50.724 } 00:06:50.724 [2024-07-15 08:19:42.854863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.983 [2024-07-15 08:19:42.999071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.983 [2024-07-15 08:19:43.056473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.242  Copying: 60/60 [kB] (average 58 MBps) 00:06:51.242 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.242 08:19:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.501 [2024-07-15 08:19:43.446734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:51.501 [2024-07-15 08:19:43.446839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62748 ] 00:06:51.501 { 00:06:51.501 "subsystems": [ 00:06:51.501 { 00:06:51.501 "subsystem": "bdev", 00:06:51.501 "config": [ 00:06:51.501 { 00:06:51.501 "params": { 00:06:51.501 "trtype": "pcie", 00:06:51.501 "traddr": "0000:00:10.0", 00:06:51.501 "name": "Nvme0" 00:06:51.501 }, 00:06:51.501 "method": "bdev_nvme_attach_controller" 00:06:51.501 }, 00:06:51.501 { 00:06:51.501 "method": "bdev_wait_for_examine" 00:06:51.501 } 00:06:51.501 ] 00:06:51.501 } 00:06:51.501 ] 00:06:51.501 } 00:06:51.501 [2024-07-15 08:19:43.582933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.759 [2024-07-15 08:19:43.700959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.759 [2024-07-15 08:19:43.754532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.019  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:52.019 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:52.019 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.588 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:52.588 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:52.588 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.588 08:19:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.588 [2024-07-15 08:19:44.753293] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:52.588 [2024-07-15 08:19:44.753896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62772 ] 00:06:52.588 { 00:06:52.588 "subsystems": [ 00:06:52.588 { 00:06:52.588 "subsystem": "bdev", 00:06:52.588 "config": [ 00:06:52.588 { 00:06:52.588 "params": { 00:06:52.588 "trtype": "pcie", 00:06:52.588 "traddr": "0000:00:10.0", 00:06:52.588 "name": "Nvme0" 00:06:52.588 }, 00:06:52.588 "method": "bdev_nvme_attach_controller" 00:06:52.588 }, 00:06:52.588 { 00:06:52.588 "method": "bdev_wait_for_examine" 00:06:52.588 } 00:06:52.588 ] 00:06:52.588 } 00:06:52.588 ] 00:06:52.588 } 00:06:52.846 [2024-07-15 08:19:44.893591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.846 [2024-07-15 08:19:45.010522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.105 [2024-07-15 08:19:45.064037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.363  Copying: 56/56 [kB] (average 27 MBps) 00:06:53.363 00:06:53.363 08:19:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:53.363 08:19:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:53.363 08:19:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.363 08:19:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.363 [2024-07-15 08:19:45.443260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:53.364 [2024-07-15 08:19:45.443358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62786 ] 00:06:53.364 { 00:06:53.364 "subsystems": [ 00:06:53.364 { 00:06:53.364 "subsystem": "bdev", 00:06:53.364 "config": [ 00:06:53.364 { 00:06:53.364 "params": { 00:06:53.364 "trtype": "pcie", 00:06:53.364 "traddr": "0000:00:10.0", 00:06:53.364 "name": "Nvme0" 00:06:53.364 }, 00:06:53.364 "method": "bdev_nvme_attach_controller" 00:06:53.364 }, 00:06:53.364 { 00:06:53.364 "method": "bdev_wait_for_examine" 00:06:53.364 } 00:06:53.364 ] 00:06:53.364 } 00:06:53.364 ] 00:06:53.364 } 00:06:53.622 [2024-07-15 08:19:45.581628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.622 [2024-07-15 08:19:45.698955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.622 [2024-07-15 08:19:45.751552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.138  Copying: 56/56 [kB] (average 27 MBps) 00:06:54.138 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.138 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.138 { 00:06:54.138 "subsystems": [ 00:06:54.138 { 00:06:54.138 "subsystem": "bdev", 00:06:54.138 "config": [ 00:06:54.138 { 00:06:54.138 "params": { 00:06:54.138 "trtype": "pcie", 00:06:54.138 "traddr": "0000:00:10.0", 00:06:54.138 "name": "Nvme0" 00:06:54.138 }, 00:06:54.138 "method": "bdev_nvme_attach_controller" 00:06:54.138 }, 00:06:54.138 { 00:06:54.138 "method": "bdev_wait_for_examine" 00:06:54.138 } 00:06:54.138 ] 00:06:54.138 } 00:06:54.138 ] 00:06:54.138 } 00:06:54.138 [2024-07-15 08:19:46.145450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:54.138 [2024-07-15 08:19:46.145566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62801 ] 00:06:54.138 [2024-07-15 08:19:46.291945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.396 [2024-07-15 08:19:46.428969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.396 [2024-07-15 08:19:46.487686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.654  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:54.654 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:54.912 08:19:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.485 08:19:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:55.485 08:19:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:55.486 08:19:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.486 08:19:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.486 [2024-07-15 08:19:47.497703] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
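Each pass in this test follows the same write, read-back, verify, and wipe sequence; a condensed sketch of the loop is below. The bss/qds values are taken from the combinations visible in this log (8 KiB and 16 KiB blocks at queue depths 1 and 64), /tmp/nvme0.json is the hypothetical config file from the earlier sketch, and the /dev/urandom read stands in for the harness's gen_bytes helper.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
bss=(8192 16384)   # block sizes exercised in this log
qds=(1 64)         # queue depths exercised in this log
for bs in "${bss[@]}"; do
  count=$(( bs == 8192 ? 7 : 3 ))                         # 57344 or 49152 bytes, as above
  head -c $(( bs * count )) /dev/urandom > "$D/dd.dump0"  # stand-in for gen_bytes
  for qd in "${qds[@]}"; do
    "$DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json /tmp/nvme0.json
    "$DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs="$bs" --qd="$qd" --count="$count" --json /tmp/nvme0.json
    diff -q "$D/dd.dump0" "$D/dd.dump1"                   # round-trip check, as in basic_rw.sh@44
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /tmp/nvme0.json  # clear_nvme
  done
done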
00:06:55.486 [2024-07-15 08:19:47.498066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62828 ] 00:06:55.486 { 00:06:55.486 "subsystems": [ 00:06:55.486 { 00:06:55.486 "subsystem": "bdev", 00:06:55.486 "config": [ 00:06:55.486 { 00:06:55.486 "params": { 00:06:55.486 "trtype": "pcie", 00:06:55.486 "traddr": "0000:00:10.0", 00:06:55.486 "name": "Nvme0" 00:06:55.486 }, 00:06:55.486 "method": "bdev_nvme_attach_controller" 00:06:55.486 }, 00:06:55.486 { 00:06:55.486 "method": "bdev_wait_for_examine" 00:06:55.486 } 00:06:55.486 ] 00:06:55.486 } 00:06:55.486 ] 00:06:55.486 } 00:06:55.486 [2024-07-15 08:19:47.635633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.765 [2024-07-15 08:19:47.752953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.765 [2024-07-15 08:19:47.805846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.025  Copying: 56/56 [kB] (average 54 MBps) 00:06:56.025 00:06:56.025 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:56.025 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:56.025 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.025 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.025 [2024-07-15 08:19:48.184904] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:56.025 [2024-07-15 08:19:48.185005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62841 ] 00:06:56.025 { 00:06:56.025 "subsystems": [ 00:06:56.025 { 00:06:56.025 "subsystem": "bdev", 00:06:56.025 "config": [ 00:06:56.025 { 00:06:56.025 "params": { 00:06:56.025 "trtype": "pcie", 00:06:56.025 "traddr": "0000:00:10.0", 00:06:56.025 "name": "Nvme0" 00:06:56.025 }, 00:06:56.025 "method": "bdev_nvme_attach_controller" 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "bdev_wait_for_examine" 00:06:56.025 } 00:06:56.025 ] 00:06:56.025 } 00:06:56.025 ] 00:06:56.025 } 00:06:56.284 [2024-07-15 08:19:48.324883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.284 [2024-07-15 08:19:48.441935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.542 [2024-07-15 08:19:48.494226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.800  Copying: 56/56 [kB] (average 54 MBps) 00:06:56.800 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.800 08:19:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.800 [2024-07-15 08:19:48.877535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:56.800 [2024-07-15 08:19:48.877634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62857 ] 00:06:56.800 { 00:06:56.800 "subsystems": [ 00:06:56.800 { 00:06:56.800 "subsystem": "bdev", 00:06:56.800 "config": [ 00:06:56.800 { 00:06:56.800 "params": { 00:06:56.800 "trtype": "pcie", 00:06:56.800 "traddr": "0000:00:10.0", 00:06:56.800 "name": "Nvme0" 00:06:56.800 }, 00:06:56.800 "method": "bdev_nvme_attach_controller" 00:06:56.800 }, 00:06:56.800 { 00:06:56.800 "method": "bdev_wait_for_examine" 00:06:56.800 } 00:06:56.800 ] 00:06:56.800 } 00:06:56.800 ] 00:06:56.800 } 00:06:57.058 [2024-07-15 08:19:49.017194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.058 [2024-07-15 08:19:49.133579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.058 [2024-07-15 08:19:49.186781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.573  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:57.573 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:57.573 08:19:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.140 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:58.140 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:58.140 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.140 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.140 [2024-07-15 08:19:50.114520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:58.140 [2024-07-15 08:19:50.114836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62881 ] 00:06:58.140 { 00:06:58.140 "subsystems": [ 00:06:58.140 { 00:06:58.140 "subsystem": "bdev", 00:06:58.140 "config": [ 00:06:58.140 { 00:06:58.140 "params": { 00:06:58.140 "trtype": "pcie", 00:06:58.140 "traddr": "0000:00:10.0", 00:06:58.140 "name": "Nvme0" 00:06:58.140 }, 00:06:58.140 "method": "bdev_nvme_attach_controller" 00:06:58.140 }, 00:06:58.140 { 00:06:58.140 "method": "bdev_wait_for_examine" 00:06:58.140 } 00:06:58.140 ] 00:06:58.140 } 00:06:58.140 ] 00:06:58.140 } 00:06:58.140 [2024-07-15 08:19:50.251435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.416 [2024-07-15 08:19:50.379326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.416 [2024-07-15 08:19:50.435459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.673  Copying: 48/48 [kB] (average 46 MBps) 00:06:58.673 00:06:58.673 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:58.673 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:58.673 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.673 08:19:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.673 { 00:06:58.673 "subsystems": [ 00:06:58.673 { 00:06:58.673 "subsystem": "bdev", 00:06:58.673 "config": [ 00:06:58.673 { 00:06:58.673 "params": { 00:06:58.673 "trtype": "pcie", 00:06:58.673 "traddr": "0000:00:10.0", 00:06:58.673 "name": "Nvme0" 00:06:58.673 }, 00:06:58.673 "method": "bdev_nvme_attach_controller" 00:06:58.673 }, 00:06:58.673 { 00:06:58.673 "method": "bdev_wait_for_examine" 00:06:58.673 } 00:06:58.673 ] 00:06:58.673 } 00:06:58.673 ] 00:06:58.673 } 00:06:58.673 [2024-07-15 08:19:50.813948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:58.673 [2024-07-15 08:19:50.814032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62895 ] 00:06:58.933 [2024-07-15 08:19:50.947618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.933 [2024-07-15 08:19:51.065784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.193 [2024-07-15 08:19:51.118479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.449  Copying: 48/48 [kB] (average 23 MBps) 00:06:59.449 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.449 08:19:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.449 [2024-07-15 08:19:51.522541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:59.449 [2024-07-15 08:19:51.522688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:06:59.449 { 00:06:59.449 "subsystems": [ 00:06:59.449 { 00:06:59.449 "subsystem": "bdev", 00:06:59.449 "config": [ 00:06:59.449 { 00:06:59.449 "params": { 00:06:59.449 "trtype": "pcie", 00:06:59.449 "traddr": "0000:00:10.0", 00:06:59.449 "name": "Nvme0" 00:06:59.449 }, 00:06:59.449 "method": "bdev_nvme_attach_controller" 00:06:59.449 }, 00:06:59.449 { 00:06:59.449 "method": "bdev_wait_for_examine" 00:06:59.449 } 00:06:59.449 ] 00:06:59.449 } 00:06:59.449 ] 00:06:59.449 } 00:06:59.705 [2024-07-15 08:19:51.664957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.705 [2024-07-15 08:19:51.784320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.705 [2024-07-15 08:19:51.836973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.255  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:00.255 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:00.255 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.844 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:00.844 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:00.844 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.844 08:19:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.844 { 00:07:00.844 "subsystems": [ 00:07:00.844 { 00:07:00.844 "subsystem": "bdev", 00:07:00.844 "config": [ 00:07:00.844 { 00:07:00.844 "params": { 00:07:00.844 "trtype": "pcie", 00:07:00.844 "traddr": "0000:00:10.0", 00:07:00.844 "name": "Nvme0" 00:07:00.844 }, 00:07:00.844 "method": "bdev_nvme_attach_controller" 00:07:00.844 }, 00:07:00.844 { 00:07:00.844 "method": "bdev_wait_for_examine" 00:07:00.844 } 00:07:00.844 ] 00:07:00.844 } 00:07:00.844 ] 00:07:00.844 } 00:07:00.844 [2024-07-15 08:19:52.799686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:00.844 [2024-07-15 08:19:52.799788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62935 ] 00:07:00.844 [2024-07-15 08:19:52.939785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.101 [2024-07-15 08:19:53.068643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.101 [2024-07-15 08:19:53.123632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.358  Copying: 48/48 [kB] (average 46 MBps) 00:07:01.358 00:07:01.358 08:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:01.358 08:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:01.358 08:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.358 08:19:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.358 { 00:07:01.358 "subsystems": [ 00:07:01.358 { 00:07:01.358 "subsystem": "bdev", 00:07:01.358 "config": [ 00:07:01.358 { 00:07:01.358 "params": { 00:07:01.358 "trtype": "pcie", 00:07:01.358 "traddr": "0000:00:10.0", 00:07:01.358 "name": "Nvme0" 00:07:01.358 }, 00:07:01.358 "method": "bdev_nvme_attach_controller" 00:07:01.358 }, 00:07:01.358 { 00:07:01.358 "method": "bdev_wait_for_examine" 00:07:01.359 } 00:07:01.359 ] 00:07:01.359 } 00:07:01.359 ] 00:07:01.359 } 00:07:01.359 [2024-07-15 08:19:53.517391] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:01.359 [2024-07-15 08:19:53.517491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62948 ] 00:07:01.616 [2024-07-15 08:19:53.658761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.875 [2024-07-15 08:19:53.788189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.875 [2024-07-15 08:19:53.847107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.132  Copying: 48/48 [kB] (average 46 MBps) 00:07:02.132 00:07:02.132 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.132 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:02.132 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:02.132 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:02.132 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:02.132 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:02.133 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:02.133 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:02.133 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:02.133 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.133 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.133 [2024-07-15 08:19:54.242082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:02.133 [2024-07-15 08:19:54.242191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62968 ] 00:07:02.133 { 00:07:02.133 "subsystems": [ 00:07:02.133 { 00:07:02.133 "subsystem": "bdev", 00:07:02.133 "config": [ 00:07:02.133 { 00:07:02.133 "params": { 00:07:02.133 "trtype": "pcie", 00:07:02.133 "traddr": "0000:00:10.0", 00:07:02.133 "name": "Nvme0" 00:07:02.133 }, 00:07:02.133 "method": "bdev_nvme_attach_controller" 00:07:02.133 }, 00:07:02.133 { 00:07:02.133 "method": "bdev_wait_for_examine" 00:07:02.133 } 00:07:02.133 ] 00:07:02.133 } 00:07:02.133 ] 00:07:02.133 } 00:07:02.390 [2024-07-15 08:19:54.381095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.390 [2024-07-15 08:19:54.496662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.390 [2024-07-15 08:19:54.550634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.907  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:02.907 00:07:02.907 ************************************ 00:07:02.907 END TEST dd_rw 00:07:02.907 ************************************ 00:07:02.907 00:07:02.907 real 0m16.390s 00:07:02.907 user 0m12.379s 00:07:02.907 sys 0m5.459s 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.907 ************************************ 00:07:02.907 START TEST dd_rw_offset 00:07:02.907 ************************************ 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:02.907 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=wkj7n1y6erbk7e31eb2s0xfghvb6khlz12uoi50o5bt1rekg8p29i7h6qbrycza2xwznj8edhxsmtmlydnn308szhgu7eynl0o18s8ssc3816yt2rci9edmhjanhnbz46anik2y26rsiy1slu05woxpuq7zvpn6svyn87pgj8c0id9qublisuqxxs57ebussu7nl4f2e0ft6a7259buvbs6lx4tcam0o8781codql20l2chpdbz6gj651e9yql6g9vb8y34sl08iyaokq226jzhu7zxx6ik78309t8an76s3nkifi9iwg3q04dz5c3fkkehd7fdf9hp1fuq32ort2vcpn9rpmwexa4ih1n28r5g9z07kdkha1zmhwhcyu570hykzw2zjw3xrbsjksgstwkry80n51tdcl14zo2ewc06o6t7nj35f4fkgg2qnjnfuu02oxt1q0k8ge0ra0lwkf774v4pbsskzknfaessum2x04zqi7arnbqoi34rweafc3zr9bz6q83dhi5ne12in326f61wzjxrfi8oq7jld6tcr7gng2ztl91mbaqwnkkrgquiwszxywg7u5umejcm8ifrnzezudqraky9shsongjckjma8dnigv860vrwast1ewgs95mvpwehokoq2qqmudnr0qsdac98zoxtwni80w1ov0qgtoriyy3ezg0lzbjpl0w1nwwff3kcbv92z2hinhlat1be0wntilhspsv1h8jdxauj6x9bgyapqqx24csuim2n5z3dghgni4k9d6x2nk26g5zzlyxsu3pow4s5xtip8298wz747m2m80447lbskjec7im3lfb99ljxn363pfclkt1b6inqsumnx4ib3mpymwz6pc17ip63dhyfpocif1jl6qqeiqkutlbznv5u0pb01lrdsc9ro0c3fywl7dvtxqzlyoo49x0tz2inn94hcm1fkzc6bn4skrv5l7cb5t1wuzdkyxxjm59mg939frihbws1dvzy171vh9v6ylidn8uj3s9ikgy19x8p8rzu3rqk7ph8alwhhheu1d6i2sau9thba3xhm8k0e9ygr0kfdayi9msy2hpci5edeiwypovojfcrup6bckb9m2hmb8o97rh50jeerxviaynhqqwj4al02qsk3555b5ebpz9d5fv6o3rksatjj0nte8itphp2ii4knar9g07abhzk4davfvrzi2nhjbhu7salgypxwx0awmbnmmg5xv6d9q2ilgz5l01h0jcdt6gns4sh07jhmy7x4dbilookdxrmhz8mkmd8unzdz4d9qhklrs0wo08m8qu3xwu3g1vvi85h7nrmmiscrk6zt5vxnzf43avibghq2fhky1x20qq54ffkgnalt2dg5shf9vyffz6p20nrgseivv2li9ndtlzz6iuzijio89dy1tvsla2o17fap37aztqxq12vrx6npge6qrmkn1w81fs8qjxz49s5k29b248ysvrqvw4vr0te473rpmr9d5dquaxi37c188myf77h14r5n9zo44pr8k83gj7iilo6i3gy7lkzg9bwk3zwwm5k8az95bmcc1bdrk5j6nbmsqcthhhnolhsahqs12dw6o5od9pyr1hxdz2tqdodo90zgc8vabblgik1w8whpep5kypq8pnvtskdqcpbpkzdvxue8rjrytiv9x2iocaxl2qpmb8dkr6yils65zs6jkacytjm7qc75rr0s991gzyuyvq7tqu4x8rmkzlkdqwqy7lr65k4cwaqazhe9tmtyz1l108faniv2as4ralwmfc3ljoaw8ovm1jaicq0hn7f4tsgfatupn7a3rmgjvyma2v0o9skbo67p38z03ix05a2yqprq6fr0vlhoj1a0kt3889pz9vtsb65ypfp7pga83a9m2hb5wh897aqckts2wm284ugbbys0aywxkxuqebiohw1nl576djyrpphe7ync08gp392mz4a75zgybfg00z9b2wmgygwrme1evmpbhope72hsi70ylld2mukxn85iiqocoejzt49smlq3shia5dtdtnseb0j7ylmzde29oqzrc46dlfevqsk5p66oo6m2qn8nz2had0ti8mh7y5qt3mmnso56lp9t4s7jjmktkrirftt1f38qs7dz66bihbthluytm89qzy7r72dmn0dx1lc1h79kanldzbrq2sqi6z9hxip3msahuoql3er3km95x6pdhx5af1po5w3l4hifc5q813fsz918k74uyzjd842w3091wpml0xe34iudgfqwi6w3nx9my1jcpvgnpk0chb0ixrucu3s13wo7dp6mrallccdrp8ra4qwxagj5v3tm8jrb8xzvylzz8dlixwunky2mh7zovbz808h7papfkd7rb3mxmmpbfnhp80yxyi019dzartszadq2dmda1koveqow80u3runkk0kmo8cludstuere3aqba5ghw795dkpb68edd477u6sguv8qpybw3tw27lmqt2po3s0sll2kabt2jnpamm7yxa6a1ctizkw9q0rp0m7em5eyzf18yqed25vcpyiewhnnxi0nu9n22thwlcjlqfgw2kdaqbygo3pg3kpblzfibzd7o5olumtbpaexga264k9ttqxxun19cw9qw3atx91iz7drawgkkjn1hnbjfryvb386jaizltq0c8ljk3b4rk9k1uow1q669wq1rkjqz0817ojoynu03nwdf5gmtrnhtdlu9jkyjg60rwmq6y69p6rd1emslb17impc0qleg4kl4thqzcyd7ola0mjuc9xwwxyricwdp567rxuinyd21jira93yrrw8trnb2cws6zv1jytne9wacop2qektl0nhg3jjw4dxgrvcuyx7madh2aabbp67sdeyoagmvnqqd1pwh85we70hlqhek9k3fkpx00dwtpjuk9g9ib0vulwh3wfcnjpb8qz1779lulv4vvvckhnjsjvtf3umtwge6gui8g6e3x8f1gcpsl507qkdgjae6c4wihkj9mo6gzo8n8pd8ihiaoufhm47jui33gr9q81sg5u1awp754yjujcnvn75os6ys4w9rc5no39ismefw17b60zzzvj5bhqx40y7q5pmeb3bwxjuqpnd8o4jnk13o9y2zkjyxb1cdydll0vdr7uidayv5yws79utw5cavb22qng649k6buace3hyoqtwzpane41lvc5z9hwgys5q3i1y4x68nujqp5gzthsoixqdcz8k72j9l8u7s0ki2365aacqjf9o66qweug8hpmrk9isr3xmh2k9y6rnashp44xqndbcllcejvnh2a840iov1mb34albmej9fn0mmtexhm8056zd7pikqr1gizmcpza253w4bi2era36u21qqtf4lcf089c0phzy7pe08r6fyngrwcw71px81ppg68w93jm9hk492cy18q7f4a1b38od2sddce4ovhmsxcay8ct7pozmorqnfdfb3zmjtxipk5ppaeezfgxhmduk4l5m1tuxgm
ny9ztzsrz7c9wz4ypfy8uecx4ukgvti1scs8yqanxvnm072hy6kt8m3770p5p609yy1kxjs7y3g8ve3zo5y9md2ehxkvgyiagdf6ph85fes8auqi07ed04r8i5g4qwgazspzy33xrzbped3ftgr52i13gvholu6cj2qh5bbonxgo8pv168jpq5ir74rhhvawqrhx39zkjmkhwdepu20oasjwu5t4ywzmh1avwkslc9bd4xwga7t9y8xtfg5uu8zioefhov7y1gd34bp1hcfws6va952p53gavd077j1wwal34p72t84z013a55qx1jb9v983y5z3yqu9mqwa625i58p62xy5pnzcr68838e5srnod1n20pdzyip58nk2k4dzxwfqggyh7z15iapawd6huz9kr191e5l7268rqb4k51zw7yljc70cozedf7t6cdq7dzx3yg2we24izi5ivv31ysnw6c6vkqxkhn3oxnzw7ahto3orv51inym9qckkzopvjdzao7wmpsw1zdid06bsbiab235fbpl6qj 00:07:02.908 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:02.908 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:02.908 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:02.908 08:19:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:02.908 [2024-07-15 08:19:55.027259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:02.908 [2024-07-15 08:19:55.027366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63000 ] 00:07:02.908 { 00:07:02.908 "subsystems": [ 00:07:02.908 { 00:07:02.908 "subsystem": "bdev", 00:07:02.908 "config": [ 00:07:02.908 { 00:07:02.908 "params": { 00:07:02.908 "trtype": "pcie", 00:07:02.908 "traddr": "0000:00:10.0", 00:07:02.908 "name": "Nvme0" 00:07:02.908 }, 00:07:02.908 "method": "bdev_nvme_attach_controller" 00:07:02.908 }, 00:07:02.908 { 00:07:02.908 "method": "bdev_wait_for_examine" 00:07:02.908 } 00:07:02.908 ] 00:07:02.908 } 00:07:02.908 ] 00:07:02.908 } 00:07:03.166 [2024-07-15 08:19:55.166334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.166 [2024-07-15 08:19:55.286062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.425 [2024-07-15 08:19:55.341373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.684  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:03.684 00:07:03.684 08:19:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:03.684 08:19:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:03.684 08:19:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:03.684 08:19:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:03.684 [2024-07-15 08:19:55.729920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:03.684 [2024-07-15 08:19:55.730033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63019 ] 00:07:03.684 { 00:07:03.684 "subsystems": [ 00:07:03.684 { 00:07:03.684 "subsystem": "bdev", 00:07:03.684 "config": [ 00:07:03.684 { 00:07:03.684 "params": { 00:07:03.684 "trtype": "pcie", 00:07:03.684 "traddr": "0000:00:10.0", 00:07:03.684 "name": "Nvme0" 00:07:03.684 }, 00:07:03.684 "method": "bdev_nvme_attach_controller" 00:07:03.684 }, 00:07:03.684 { 00:07:03.684 "method": "bdev_wait_for_examine" 00:07:03.684 } 00:07:03.684 ] 00:07:03.684 } 00:07:03.684 ] 00:07:03.684 } 00:07:03.941 [2024-07-15 08:19:55.868898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.941 [2024-07-15 08:19:55.988493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.941 [2024-07-15 08:19:56.042196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.200  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.200 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ wkj7n1y6erbk7e31eb2s0xfghvb6khlz12uoi50o5bt1rekg8p29i7h6qbrycza2xwznj8edhxsmtmlydnn308szhgu7eynl0o18s8ssc3816yt2rci9edmhjanhnbz46anik2y26rsiy1slu05woxpuq7zvpn6svyn87pgj8c0id9qublisuqxxs57ebussu7nl4f2e0ft6a7259buvbs6lx4tcam0o8781codql20l2chpdbz6gj651e9yql6g9vb8y34sl08iyaokq226jzhu7zxx6ik78309t8an76s3nkifi9iwg3q04dz5c3fkkehd7fdf9hp1fuq32ort2vcpn9rpmwexa4ih1n28r5g9z07kdkha1zmhwhcyu570hykzw2zjw3xrbsjksgstwkry80n51tdcl14zo2ewc06o6t7nj35f4fkgg2qnjnfuu02oxt1q0k8ge0ra0lwkf774v4pbsskzknfaessum2x04zqi7arnbqoi34rweafc3zr9bz6q83dhi5ne12in326f61wzjxrfi8oq7jld6tcr7gng2ztl91mbaqwnkkrgquiwszxywg7u5umejcm8ifrnzezudqraky9shsongjckjma8dnigv860vrwast1ewgs95mvpwehokoq2qqmudnr0qsdac98zoxtwni80w1ov0qgtoriyy3ezg0lzbjpl0w1nwwff3kcbv92z2hinhlat1be0wntilhspsv1h8jdxauj6x9bgyapqqx24csuim2n5z3dghgni4k9d6x2nk26g5zzlyxsu3pow4s5xtip8298wz747m2m80447lbskjec7im3lfb99ljxn363pfclkt1b6inqsumnx4ib3mpymwz6pc17ip63dhyfpocif1jl6qqeiqkutlbznv5u0pb01lrdsc9ro0c3fywl7dvtxqzlyoo49x0tz2inn94hcm1fkzc6bn4skrv5l7cb5t1wuzdkyxxjm59mg939frihbws1dvzy171vh9v6ylidn8uj3s9ikgy19x8p8rzu3rqk7ph8alwhhheu1d6i2sau9thba3xhm8k0e9ygr0kfdayi9msy2hpci5edeiwypovojfcrup6bckb9m2hmb8o97rh50jeerxviaynhqqwj4al02qsk3555b5ebpz9d5fv6o3rksatjj0nte8itphp2ii4knar9g07abhzk4davfvrzi2nhjbhu7salgypxwx0awmbnmmg5xv6d9q2ilgz5l01h0jcdt6gns4sh07jhmy7x4dbilookdxrmhz8mkmd8unzdz4d9qhklrs0wo08m8qu3xwu3g1vvi85h7nrmmiscrk6zt5vxnzf43avibghq2fhky1x20qq54ffkgnalt2dg5shf9vyffz6p20nrgseivv2li9ndtlzz6iuzijio89dy1tvsla2o17fap37aztqxq12vrx6npge6qrmkn1w81fs8qjxz49s5k29b248ysvrqvw4vr0te473rpmr9d5dquaxi37c188myf77h14r5n9zo44pr8k83gj7iilo6i3gy7lkzg9bwk3zwwm5k8az95bmcc1bdrk5j6nbmsqcthhhnolhsahqs12dw6o5od9pyr1hxdz2tqdodo90zgc8vabblgik1w8whpep5kypq8pnvtskdqcpbpkzdvxue8rjrytiv9x2iocaxl2qpmb8dkr6yils65zs6jkacytjm7qc75rr0s991gzyuyvq7tqu4x8rmkzlkdqwqy7lr65k4cwaqazhe9tmtyz1l108faniv2as4ralwmfc3ljoaw8ovm1jaicq0hn7f4tsgfatupn7a3rmgjvyma2v0o9skbo67p38z03ix05a2yqprq6fr0vlhoj1a0kt3889pz9vtsb65ypfp7pga83a9m2hb5wh897aqckts2wm284ugbbys0aywxkxuqebiohw1nl576djyrpphe7ync08gp392mz4a75zgybfg00z9b2wmgygwrme1evmpbhope72hsi70ylld2mukxn85iiqocoejzt49smlq3shia5dtdtnseb0j7ylmzde29oqzrc46dlfevqsk5p66oo6m2qn8nz2had0ti8mh7y5qt3mmnso56lp9t4s7jjmktkrirftt1f
38qs7dz66bihbthluytm89qzy7r72dmn0dx1lc1h79kanldzbrq2sqi6z9hxip3msahuoql3er3km95x6pdhx5af1po5w3l4hifc5q813fsz918k74uyzjd842w3091wpml0xe34iudgfqwi6w3nx9my1jcpvgnpk0chb0ixrucu3s13wo7dp6mrallccdrp8ra4qwxagj5v3tm8jrb8xzvylzz8dlixwunky2mh7zovbz808h7papfkd7rb3mxmmpbfnhp80yxyi019dzartszadq2dmda1koveqow80u3runkk0kmo8cludstuere3aqba5ghw795dkpb68edd477u6sguv8qpybw3tw27lmqt2po3s0sll2kabt2jnpamm7yxa6a1ctizkw9q0rp0m7em5eyzf18yqed25vcpyiewhnnxi0nu9n22thwlcjlqfgw2kdaqbygo3pg3kpblzfibzd7o5olumtbpaexga264k9ttqxxun19cw9qw3atx91iz7drawgkkjn1hnbjfryvb386jaizltq0c8ljk3b4rk9k1uow1q669wq1rkjqz0817ojoynu03nwdf5gmtrnhtdlu9jkyjg60rwmq6y69p6rd1emslb17impc0qleg4kl4thqzcyd7ola0mjuc9xwwxyricwdp567rxuinyd21jira93yrrw8trnb2cws6zv1jytne9wacop2qektl0nhg3jjw4dxgrvcuyx7madh2aabbp67sdeyoagmvnqqd1pwh85we70hlqhek9k3fkpx00dwtpjuk9g9ib0vulwh3wfcnjpb8qz1779lulv4vvvckhnjsjvtf3umtwge6gui8g6e3x8f1gcpsl507qkdgjae6c4wihkj9mo6gzo8n8pd8ihiaoufhm47jui33gr9q81sg5u1awp754yjujcnvn75os6ys4w9rc5no39ismefw17b60zzzvj5bhqx40y7q5pmeb3bwxjuqpnd8o4jnk13o9y2zkjyxb1cdydll0vdr7uidayv5yws79utw5cavb22qng649k6buace3hyoqtwzpane41lvc5z9hwgys5q3i1y4x68nujqp5gzthsoixqdcz8k72j9l8u7s0ki2365aacqjf9o66qweug8hpmrk9isr3xmh2k9y6rnashp44xqndbcllcejvnh2a840iov1mb34albmej9fn0mmtexhm8056zd7pikqr1gizmcpza253w4bi2era36u21qqtf4lcf089c0phzy7pe08r6fyngrwcw71px81ppg68w93jm9hk492cy18q7f4a1b38od2sddce4ovhmsxcay8ct7pozmorqnfdfb3zmjtxipk5ppaeezfgxhmduk4l5m1tuxgmny9ztzsrz7c9wz4ypfy8uecx4ukgvti1scs8yqanxvnm072hy6kt8m3770p5p609yy1kxjs7y3g8ve3zo5y9md2ehxkvgyiagdf6ph85fes8auqi07ed04r8i5g4qwgazspzy33xrzbped3ftgr52i13gvholu6cj2qh5bbonxgo8pv168jpq5ir74rhhvawqrhx39zkjmkhwdepu20oasjwu5t4ywzmh1avwkslc9bd4xwga7t9y8xtfg5uu8zioefhov7y1gd34bp1hcfws6va952p53gavd077j1wwal34p72t84z013a55qx1jb9v983y5z3yqu9mqwa625i58p62xy5pnzcr68838e5srnod1n20pdzyip58nk2k4dzxwfqggyh7z15iapawd6huz9kr191e5l7268rqb4k51zw7yljc70cozedf7t6cdq7dzx3yg2we24izi5ivv31ysnw6c6vkqxkhn3oxnzw7ahto3orv51inym9qckkzopvjdzao7wmpsw1zdid06bsbiab235fbpl6qj == 
\w\k\j\7\n\1\y\6\e\r\b\k\7\e\3\1\e\b\2\s\0\x\f\g\h\v\b\6\k\h\l\z\1\2\u\o\i\5\0\o\5\b\t\1\r\e\k\g\8\p\2\9\i\7\h\6\q\b\r\y\c\z\a\2\x\w\z\n\j\8\e\d\h\x\s\m\t\m\l\y\d\n\n\3\0\8\s\z\h\g\u\7\e\y\n\l\0\o\1\8\s\8\s\s\c\3\8\1\6\y\t\2\r\c\i\9\e\d\m\h\j\a\n\h\n\b\z\4\6\a\n\i\k\2\y\2\6\r\s\i\y\1\s\l\u\0\5\w\o\x\p\u\q\7\z\v\p\n\6\s\v\y\n\8\7\p\g\j\8\c\0\i\d\9\q\u\b\l\i\s\u\q\x\x\s\5\7\e\b\u\s\s\u\7\n\l\4\f\2\e\0\f\t\6\a\7\2\5\9\b\u\v\b\s\6\l\x\4\t\c\a\m\0\o\8\7\8\1\c\o\d\q\l\2\0\l\2\c\h\p\d\b\z\6\g\j\6\5\1\e\9\y\q\l\6\g\9\v\b\8\y\3\4\s\l\0\8\i\y\a\o\k\q\2\2\6\j\z\h\u\7\z\x\x\6\i\k\7\8\3\0\9\t\8\a\n\7\6\s\3\n\k\i\f\i\9\i\w\g\3\q\0\4\d\z\5\c\3\f\k\k\e\h\d\7\f\d\f\9\h\p\1\f\u\q\3\2\o\r\t\2\v\c\p\n\9\r\p\m\w\e\x\a\4\i\h\1\n\2\8\r\5\g\9\z\0\7\k\d\k\h\a\1\z\m\h\w\h\c\y\u\5\7\0\h\y\k\z\w\2\z\j\w\3\x\r\b\s\j\k\s\g\s\t\w\k\r\y\8\0\n\5\1\t\d\c\l\1\4\z\o\2\e\w\c\0\6\o\6\t\7\n\j\3\5\f\4\f\k\g\g\2\q\n\j\n\f\u\u\0\2\o\x\t\1\q\0\k\8\g\e\0\r\a\0\l\w\k\f\7\7\4\v\4\p\b\s\s\k\z\k\n\f\a\e\s\s\u\m\2\x\0\4\z\q\i\7\a\r\n\b\q\o\i\3\4\r\w\e\a\f\c\3\z\r\9\b\z\6\q\8\3\d\h\i\5\n\e\1\2\i\n\3\2\6\f\6\1\w\z\j\x\r\f\i\8\o\q\7\j\l\d\6\t\c\r\7\g\n\g\2\z\t\l\9\1\m\b\a\q\w\n\k\k\r\g\q\u\i\w\s\z\x\y\w\g\7\u\5\u\m\e\j\c\m\8\i\f\r\n\z\e\z\u\d\q\r\a\k\y\9\s\h\s\o\n\g\j\c\k\j\m\a\8\d\n\i\g\v\8\6\0\v\r\w\a\s\t\1\e\w\g\s\9\5\m\v\p\w\e\h\o\k\o\q\2\q\q\m\u\d\n\r\0\q\s\d\a\c\9\8\z\o\x\t\w\n\i\8\0\w\1\o\v\0\q\g\t\o\r\i\y\y\3\e\z\g\0\l\z\b\j\p\l\0\w\1\n\w\w\f\f\3\k\c\b\v\9\2\z\2\h\i\n\h\l\a\t\1\b\e\0\w\n\t\i\l\h\s\p\s\v\1\h\8\j\d\x\a\u\j\6\x\9\b\g\y\a\p\q\q\x\2\4\c\s\u\i\m\2\n\5\z\3\d\g\h\g\n\i\4\k\9\d\6\x\2\n\k\2\6\g\5\z\z\l\y\x\s\u\3\p\o\w\4\s\5\x\t\i\p\8\2\9\8\w\z\7\4\7\m\2\m\8\0\4\4\7\l\b\s\k\j\e\c\7\i\m\3\l\f\b\9\9\l\j\x\n\3\6\3\p\f\c\l\k\t\1\b\6\i\n\q\s\u\m\n\x\4\i\b\3\m\p\y\m\w\z\6\p\c\1\7\i\p\6\3\d\h\y\f\p\o\c\i\f\1\j\l\6\q\q\e\i\q\k\u\t\l\b\z\n\v\5\u\0\p\b\0\1\l\r\d\s\c\9\r\o\0\c\3\f\y\w\l\7\d\v\t\x\q\z\l\y\o\o\4\9\x\0\t\z\2\i\n\n\9\4\h\c\m\1\f\k\z\c\6\b\n\4\s\k\r\v\5\l\7\c\b\5\t\1\w\u\z\d\k\y\x\x\j\m\5\9\m\g\9\3\9\f\r\i\h\b\w\s\1\d\v\z\y\1\7\1\v\h\9\v\6\y\l\i\d\n\8\u\j\3\s\9\i\k\g\y\1\9\x\8\p\8\r\z\u\3\r\q\k\7\p\h\8\a\l\w\h\h\h\e\u\1\d\6\i\2\s\a\u\9\t\h\b\a\3\x\h\m\8\k\0\e\9\y\g\r\0\k\f\d\a\y\i\9\m\s\y\2\h\p\c\i\5\e\d\e\i\w\y\p\o\v\o\j\f\c\r\u\p\6\b\c\k\b\9\m\2\h\m\b\8\o\9\7\r\h\5\0\j\e\e\r\x\v\i\a\y\n\h\q\q\w\j\4\a\l\0\2\q\s\k\3\5\5\5\b\5\e\b\p\z\9\d\5\f\v\6\o\3\r\k\s\a\t\j\j\0\n\t\e\8\i\t\p\h\p\2\i\i\4\k\n\a\r\9\g\0\7\a\b\h\z\k\4\d\a\v\f\v\r\z\i\2\n\h\j\b\h\u\7\s\a\l\g\y\p\x\w\x\0\a\w\m\b\n\m\m\g\5\x\v\6\d\9\q\2\i\l\g\z\5\l\0\1\h\0\j\c\d\t\6\g\n\s\4\s\h\0\7\j\h\m\y\7\x\4\d\b\i\l\o\o\k\d\x\r\m\h\z\8\m\k\m\d\8\u\n\z\d\z\4\d\9\q\h\k\l\r\s\0\w\o\0\8\m\8\q\u\3\x\w\u\3\g\1\v\v\i\8\5\h\7\n\r\m\m\i\s\c\r\k\6\z\t\5\v\x\n\z\f\4\3\a\v\i\b\g\h\q\2\f\h\k\y\1\x\2\0\q\q\5\4\f\f\k\g\n\a\l\t\2\d\g\5\s\h\f\9\v\y\f\f\z\6\p\2\0\n\r\g\s\e\i\v\v\2\l\i\9\n\d\t\l\z\z\6\i\u\z\i\j\i\o\8\9\d\y\1\t\v\s\l\a\2\o\1\7\f\a\p\3\7\a\z\t\q\x\q\1\2\v\r\x\6\n\p\g\e\6\q\r\m\k\n\1\w\8\1\f\s\8\q\j\x\z\4\9\s\5\k\2\9\b\2\4\8\y\s\v\r\q\v\w\4\v\r\0\t\e\4\7\3\r\p\m\r\9\d\5\d\q\u\a\x\i\3\7\c\1\8\8\m\y\f\7\7\h\1\4\r\5\n\9\z\o\4\4\p\r\8\k\8\3\g\j\7\i\i\l\o\6\i\3\g\y\7\l\k\z\g\9\b\w\k\3\z\w\w\m\5\k\8\a\z\9\5\b\m\c\c\1\b\d\r\k\5\j\6\n\b\m\s\q\c\t\h\h\h\n\o\l\h\s\a\h\q\s\1\2\d\w\6\o\5\o\d\9\p\y\r\1\h\x\d\z\2\t\q\d\o\d\o\9\0\z\g\c\8\v\a\b\b\l\g\i\k\1\w\8\w\h\p\e\p\5\k\y\p\q\8\p\n\v\t\s\k\d\q\c\p\b\p\k\z\d\v\x\u\e\8\r\j\r\y\t\i\v\9\x\2\i\o\c\a\x\l\2\q\p\m\b\8\d\k\r\6\y\i\l\s\6\5\z\s\6\j\k\a\c\y\t\j\m\7\q\c\7\5\r\r\0\s\9\9\1\g\z\y\u\y\v\q\7\t\q\u\4\x\8\r\m\k\z\l\k\d\q\w\q\y\7\l\r\6\5\k\4\c\w\
a\q\a\z\h\e\9\t\m\t\y\z\1\l\1\0\8\f\a\n\i\v\2\a\s\4\r\a\l\w\m\f\c\3\l\j\o\a\w\8\o\v\m\1\j\a\i\c\q\0\h\n\7\f\4\t\s\g\f\a\t\u\p\n\7\a\3\r\m\g\j\v\y\m\a\2\v\0\o\9\s\k\b\o\6\7\p\3\8\z\0\3\i\x\0\5\a\2\y\q\p\r\q\6\f\r\0\v\l\h\o\j\1\a\0\k\t\3\8\8\9\p\z\9\v\t\s\b\6\5\y\p\f\p\7\p\g\a\8\3\a\9\m\2\h\b\5\w\h\8\9\7\a\q\c\k\t\s\2\w\m\2\8\4\u\g\b\b\y\s\0\a\y\w\x\k\x\u\q\e\b\i\o\h\w\1\n\l\5\7\6\d\j\y\r\p\p\h\e\7\y\n\c\0\8\g\p\3\9\2\m\z\4\a\7\5\z\g\y\b\f\g\0\0\z\9\b\2\w\m\g\y\g\w\r\m\e\1\e\v\m\p\b\h\o\p\e\7\2\h\s\i\7\0\y\l\l\d\2\m\u\k\x\n\8\5\i\i\q\o\c\o\e\j\z\t\4\9\s\m\l\q\3\s\h\i\a\5\d\t\d\t\n\s\e\b\0\j\7\y\l\m\z\d\e\2\9\o\q\z\r\c\4\6\d\l\f\e\v\q\s\k\5\p\6\6\o\o\6\m\2\q\n\8\n\z\2\h\a\d\0\t\i\8\m\h\7\y\5\q\t\3\m\m\n\s\o\5\6\l\p\9\t\4\s\7\j\j\m\k\t\k\r\i\r\f\t\t\1\f\3\8\q\s\7\d\z\6\6\b\i\h\b\t\h\l\u\y\t\m\8\9\q\z\y\7\r\7\2\d\m\n\0\d\x\1\l\c\1\h\7\9\k\a\n\l\d\z\b\r\q\2\s\q\i\6\z\9\h\x\i\p\3\m\s\a\h\u\o\q\l\3\e\r\3\k\m\9\5\x\6\p\d\h\x\5\a\f\1\p\o\5\w\3\l\4\h\i\f\c\5\q\8\1\3\f\s\z\9\1\8\k\7\4\u\y\z\j\d\8\4\2\w\3\0\9\1\w\p\m\l\0\x\e\3\4\i\u\d\g\f\q\w\i\6\w\3\n\x\9\m\y\1\j\c\p\v\g\n\p\k\0\c\h\b\0\i\x\r\u\c\u\3\s\1\3\w\o\7\d\p\6\m\r\a\l\l\c\c\d\r\p\8\r\a\4\q\w\x\a\g\j\5\v\3\t\m\8\j\r\b\8\x\z\v\y\l\z\z\8\d\l\i\x\w\u\n\k\y\2\m\h\7\z\o\v\b\z\8\0\8\h\7\p\a\p\f\k\d\7\r\b\3\m\x\m\m\p\b\f\n\h\p\8\0\y\x\y\i\0\1\9\d\z\a\r\t\s\z\a\d\q\2\d\m\d\a\1\k\o\v\e\q\o\w\8\0\u\3\r\u\n\k\k\0\k\m\o\8\c\l\u\d\s\t\u\e\r\e\3\a\q\b\a\5\g\h\w\7\9\5\d\k\p\b\6\8\e\d\d\4\7\7\u\6\s\g\u\v\8\q\p\y\b\w\3\t\w\2\7\l\m\q\t\2\p\o\3\s\0\s\l\l\2\k\a\b\t\2\j\n\p\a\m\m\7\y\x\a\6\a\1\c\t\i\z\k\w\9\q\0\r\p\0\m\7\e\m\5\e\y\z\f\1\8\y\q\e\d\2\5\v\c\p\y\i\e\w\h\n\n\x\i\0\n\u\9\n\2\2\t\h\w\l\c\j\l\q\f\g\w\2\k\d\a\q\b\y\g\o\3\p\g\3\k\p\b\l\z\f\i\b\z\d\7\o\5\o\l\u\m\t\b\p\a\e\x\g\a\2\6\4\k\9\t\t\q\x\x\u\n\1\9\c\w\9\q\w\3\a\t\x\9\1\i\z\7\d\r\a\w\g\k\k\j\n\1\h\n\b\j\f\r\y\v\b\3\8\6\j\a\i\z\l\t\q\0\c\8\l\j\k\3\b\4\r\k\9\k\1\u\o\w\1\q\6\6\9\w\q\1\r\k\j\q\z\0\8\1\7\o\j\o\y\n\u\0\3\n\w\d\f\5\g\m\t\r\n\h\t\d\l\u\9\j\k\y\j\g\6\0\r\w\m\q\6\y\6\9\p\6\r\d\1\e\m\s\l\b\1\7\i\m\p\c\0\q\l\e\g\4\k\l\4\t\h\q\z\c\y\d\7\o\l\a\0\m\j\u\c\9\x\w\w\x\y\r\i\c\w\d\p\5\6\7\r\x\u\i\n\y\d\2\1\j\i\r\a\9\3\y\r\r\w\8\t\r\n\b\2\c\w\s\6\z\v\1\j\y\t\n\e\9\w\a\c\o\p\2\q\e\k\t\l\0\n\h\g\3\j\j\w\4\d\x\g\r\v\c\u\y\x\7\m\a\d\h\2\a\a\b\b\p\6\7\s\d\e\y\o\a\g\m\v\n\q\q\d\1\p\w\h\8\5\w\e\7\0\h\l\q\h\e\k\9\k\3\f\k\p\x\0\0\d\w\t\p\j\u\k\9\g\9\i\b\0\v\u\l\w\h\3\w\f\c\n\j\p\b\8\q\z\1\7\7\9\l\u\l\v\4\v\v\v\c\k\h\n\j\s\j\v\t\f\3\u\m\t\w\g\e\6\g\u\i\8\g\6\e\3\x\8\f\1\g\c\p\s\l\5\0\7\q\k\d\g\j\a\e\6\c\4\w\i\h\k\j\9\m\o\6\g\z\o\8\n\8\p\d\8\i\h\i\a\o\u\f\h\m\4\7\j\u\i\3\3\g\r\9\q\8\1\s\g\5\u\1\a\w\p\7\5\4\y\j\u\j\c\n\v\n\7\5\o\s\6\y\s\4\w\9\r\c\5\n\o\3\9\i\s\m\e\f\w\1\7\b\6\0\z\z\z\v\j\5\b\h\q\x\4\0\y\7\q\5\p\m\e\b\3\b\w\x\j\u\q\p\n\d\8\o\4\j\n\k\1\3\o\9\y\2\z\k\j\y\x\b\1\c\d\y\d\l\l\0\v\d\r\7\u\i\d\a\y\v\5\y\w\s\7\9\u\t\w\5\c\a\v\b\2\2\q\n\g\6\4\9\k\6\b\u\a\c\e\3\h\y\o\q\t\w\z\p\a\n\e\4\1\l\v\c\5\z\9\h\w\g\y\s\5\q\3\i\1\y\4\x\6\8\n\u\j\q\p\5\g\z\t\h\s\o\i\x\q\d\c\z\8\k\7\2\j\9\l\8\u\7\s\0\k\i\2\3\6\5\a\a\c\q\j\f\9\o\6\6\q\w\e\u\g\8\h\p\m\r\k\9\i\s\r\3\x\m\h\2\k\9\y\6\r\n\a\s\h\p\4\4\x\q\n\d\b\c\l\l\c\e\j\v\n\h\2\a\8\4\0\i\o\v\1\m\b\3\4\a\l\b\m\e\j\9\f\n\0\m\m\t\e\x\h\m\8\0\5\6\z\d\7\p\i\k\q\r\1\g\i\z\m\c\p\z\a\2\5\3\w\4\b\i\2\e\r\a\3\6\u\2\1\q\q\t\f\4\l\c\f\0\8\9\c\0\p\h\z\y\7\p\e\0\8\r\6\f\y\n\g\r\w\c\w\7\1\p\x\8\1\p\p\g\6\8\w\9\3\j\m\9\h\k\4\9\2\c\y\1\8\q\7\f\4\a\1\b\3\8\o\d\2\s\d\d\c\e\4\o\v\h\m\s\x\c\a\y\8\c\t\7\p\o\z\m\o\r\q\n\f\d\f\b\3\z\m\j\t\x\i\p\k\5\p\p\a\e\e\z\f\g\x\h\m\d\u\k\4\l\5\m\1\t\u\x\g\m\n\y\9\z\t
\z\s\r\z\7\c\9\w\z\4\y\p\f\y\8\u\e\c\x\4\u\k\g\v\t\i\1\s\c\s\8\y\q\a\n\x\v\n\m\0\7\2\h\y\6\k\t\8\m\3\7\7\0\p\5\p\6\0\9\y\y\1\k\x\j\s\7\y\3\g\8\v\e\3\z\o\5\y\9\m\d\2\e\h\x\k\v\g\y\i\a\g\d\f\6\p\h\8\5\f\e\s\8\a\u\q\i\0\7\e\d\0\4\r\8\i\5\g\4\q\w\g\a\z\s\p\z\y\3\3\x\r\z\b\p\e\d\3\f\t\g\r\5\2\i\1\3\g\v\h\o\l\u\6\c\j\2\q\h\5\b\b\o\n\x\g\o\8\p\v\1\6\8\j\p\q\5\i\r\7\4\r\h\h\v\a\w\q\r\h\x\3\9\z\k\j\m\k\h\w\d\e\p\u\2\0\o\a\s\j\w\u\5\t\4\y\w\z\m\h\1\a\v\w\k\s\l\c\9\b\d\4\x\w\g\a\7\t\9\y\8\x\t\f\g\5\u\u\8\z\i\o\e\f\h\o\v\7\y\1\g\d\3\4\b\p\1\h\c\f\w\s\6\v\a\9\5\2\p\5\3\g\a\v\d\0\7\7\j\1\w\w\a\l\3\4\p\7\2\t\8\4\z\0\1\3\a\5\5\q\x\1\j\b\9\v\9\8\3\y\5\z\3\y\q\u\9\m\q\w\a\6\2\5\i\5\8\p\6\2\x\y\5\p\n\z\c\r\6\8\8\3\8\e\5\s\r\n\o\d\1\n\2\0\p\d\z\y\i\p\5\8\n\k\2\k\4\d\z\x\w\f\q\g\g\y\h\7\z\1\5\i\a\p\a\w\d\6\h\u\z\9\k\r\1\9\1\e\5\l\7\2\6\8\r\q\b\4\k\5\1\z\w\7\y\l\j\c\7\0\c\o\z\e\d\f\7\t\6\c\d\q\7\d\z\x\3\y\g\2\w\e\2\4\i\z\i\5\i\v\v\3\1\y\s\n\w\6\c\6\v\k\q\x\k\h\n\3\o\x\n\z\w\7\a\h\t\o\3\o\r\v\5\1\i\n\y\m\9\q\c\k\k\z\o\p\v\j\d\z\a\o\7\w\m\p\s\w\1\z\d\i\d\0\6\b\s\b\i\a\b\2\3\5\f\b\p\l\6\q\j ]] 00:07:04.459 00:07:04.459 real 0m1.440s 00:07:04.459 user 0m1.026s 00:07:04.459 sys 0m0.603s 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.459 ************************************ 00:07:04.459 END TEST dd_rw_offset 00:07:04.459 ************************************ 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.459 08:19:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.459 { 00:07:04.459 "subsystems": [ 00:07:04.459 { 00:07:04.459 "subsystem": "bdev", 00:07:04.459 "config": [ 00:07:04.459 { 00:07:04.459 "params": { 00:07:04.459 "trtype": "pcie", 00:07:04.459 "traddr": "0000:00:10.0", 00:07:04.459 "name": "Nvme0" 00:07:04.459 }, 00:07:04.459 "method": "bdev_nvme_attach_controller" 00:07:04.459 }, 00:07:04.459 { 00:07:04.459 "method": "bdev_wait_for_examine" 00:07:04.459 } 00:07:04.459 ] 00:07:04.459 } 00:07:04.459 ] 00:07:04.459 } 00:07:04.459 [2024-07-15 08:19:56.476868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
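The dd_rw_offset test that just finished exercises --seek and --skip: one 4 KiB chunk of random data is written one block into the bdev, read back from the same offset, and the two buffers must match. A minimal sketch follows, again using the hypothetical /tmp/nvme0.json config and a /dev/urandom read in place of gen_bytes; cmp replaces the script's read/[[ == ]] comparison.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
head -c 4096 /dev/urandom > "$D/dd.dump0"        # stand-in for gen_bytes 4096
# write the chunk one block into the bdev (--seek=1), then read it back (--skip=1)
"$DD" --if="$D/dd.dump0" --ob=Nvme0n1 --seek=1 --json /tmp/nvme0.json
"$DD" --ib=Nvme0n1 --of="$D/dd.dump1" --skip=1 --count=1 --json /tmp/nvme0.json
cmp -s "$D/dd.dump0" "$D/dd.dump1" && echo "offset round trip OK"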
00:07:04.459 [2024-07-15 08:19:56.476988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63048 ] 00:07:04.459 [2024-07-15 08:19:56.621247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.718 [2024-07-15 08:19:56.739482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.718 [2024-07-15 08:19:56.793500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.977  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:04.977 00:07:04.977 08:19:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.977 ************************************ 00:07:04.977 END TEST spdk_dd_basic_rw 00:07:04.977 ************************************ 00:07:04.977 00:07:04.977 real 0m19.699s 00:07:04.977 user 0m14.536s 00:07:04.977 sys 0m6.732s 00:07:04.977 08:19:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.977 08:19:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.235 08:19:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:05.235 08:19:57 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.235 08:19:57 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.235 08:19:57 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.235 08:19:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.235 ************************************ 00:07:05.235 START TEST spdk_dd_posix 00:07:05.235 ************************************ 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.235 * Looking for test storage... 
00:07:05.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:05.235 * First test run, liburing in use 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.235 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:05.235 ************************************ 00:07:05.235 START TEST dd_flag_append 00:07:05.235 ************************************ 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=0y5onpsnf0muat86t87jpmd5w3ubl8tt 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8273kludkva61wuvsg4rifkx9bvjfz6t 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 0y5onpsnf0muat86t87jpmd5w3ubl8tt 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8273kludkva61wuvsg4rifkx9bvjfz6t 00:07:05.236 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:05.236 [2024-07-15 08:19:57.353064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
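The dd_flag_append invocation that just started checks --oflag=append: dd.dump1 already holds one 32-character string (posix.sh@23 above), dd.dump0 holds another, and after the copy dd.dump1 must contain the two concatenated, as the comparison below confirms. A rough sketch of that flow follows; the od/tr pipeline is only a stand-in for the harness's gen_bytes 32 helper, and /tmp/nvme0.json is not needed since both sides are plain files.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
dump0=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')   # 32 chars, stand-in for gen_bytes 32
dump1=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf %s "$dump0" > "$D/dd.dump0"
printf %s "$dump1" > "$D/dd.dump1"
# --oflag=append must add dump0's bytes after dd.dump1's existing content
"$DD" --if="$D/dd.dump0" --of="$D/dd.dump1" --oflag=append
[[ "$(<"$D/dd.dump1")" == "$dump1$dump0" ]] && echo "append OK"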
00:07:05.236 [2024-07-15 08:19:57.353901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:07:05.494 [2024-07-15 08:19:57.493683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.494 [2024-07-15 08:19:57.610407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.494 [2024-07-15 08:19:57.663844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.751  Copying: 32/32 [B] (average 31 kBps) 00:07:05.751 00:07:05.751 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8273kludkva61wuvsg4rifkx9bvjfz6t0y5onpsnf0muat86t87jpmd5w3ubl8tt == \8\2\7\3\k\l\u\d\k\v\a\6\1\w\u\v\s\g\4\r\i\f\k\x\9\b\v\j\f\z\6\t\0\y\5\o\n\p\s\n\f\0\m\u\a\t\8\6\t\8\7\j\p\m\d\5\w\3\u\b\l\8\t\t ]] 00:07:05.751 00:07:05.751 real 0m0.628s 00:07:05.751 user 0m0.368s 00:07:05.751 sys 0m0.274s 00:07:05.751 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.751 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.751 ************************************ 00:07:05.751 END TEST dd_flag_append 00:07:05.751 ************************************ 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 ************************************ 00:07:06.010 START TEST dd_flag_directory 00:07:06.010 ************************************ 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.010 08:19:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.010 [2024-07-15 08:19:58.017559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:06.010 [2024-07-15 08:19:58.017645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63141 ] 00:07:06.010 [2024-07-15 08:19:58.153165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.268 [2024-07-15 08:19:58.269452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.268 [2024-07-15 08:19:58.322399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.268 [2024-07-15 08:19:58.356959] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.268 [2024-07-15 08:19:58.357024] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.268 [2024-07-15 08:19:58.357041] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.526 [2024-07-15 08:19:58.470080] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.526 08:19:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.526 [2024-07-15 08:19:58.632669] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:06.526 [2024-07-15 08:19:58.633040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:07:06.785 [2024-07-15 08:19:58.772879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.785 [2024-07-15 08:19:58.890114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.785 [2024-07-15 08:19:58.943212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.044 [2024-07-15 08:19:58.977497] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.044 [2024-07-15 08:19:58.977559] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.044 [2024-07-15 08:19:58.977578] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.044 [2024-07-15 08:19:59.090572] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.044 00:07:07.044 real 0m1.227s 00:07:07.044 user 0m0.716s 00:07:07.044 sys 0m0.294s 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.044 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:07.044 ************************************ 00:07:07.044 END TEST dd_flag_directory 00:07:07.044 
************************************ 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.302 ************************************ 00:07:07.302 START TEST dd_flag_nofollow 00:07:07.302 ************************************ 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.302 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.303 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.303 
[2024-07-15 08:19:59.309342] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:07.303 [2024-07-15 08:19:59.309456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63179 ] 00:07:07.303 [2024-07-15 08:19:59.449091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.561 [2024-07-15 08:19:59.566094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.561 [2024-07-15 08:19:59.619209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.561 [2024-07-15 08:19:59.654125] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:07.562 [2024-07-15 08:19:59.654181] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:07.562 [2024-07-15 08:19:59.654208] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.820 [2024-07-15 08:19:59.768655] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.820 08:19:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.820 [2024-07-15 08:19:59.917750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:07.820 [2024-07-15 08:19:59.917842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63194 ] 00:07:08.080 [2024-07-15 08:20:00.050806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.080 [2024-07-15 08:20:00.170935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.080 [2024-07-15 08:20:00.223975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.339 [2024-07-15 08:20:00.258554] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.339 [2024-07-15 08:20:00.258621] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.339 [2024-07-15 08:20:00.258641] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.339 [2024-07-15 08:20:00.371590] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:08.339 08:20:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.598 [2024-07-15 08:20:00.543175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
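The two NOT-wrapped runs above fed spdk_dd the dd.dump0.link / dd.dump1.link symlinks created with ln -fs and required O_NOFOLLOW to reject them with ELOOP ("Too many levels of symbolic links"); the copy launched just above drops the flag and must dereference dd.dump0.link normally. A rough coreutils equivalent (sketch only; file names reuse the test's):

    printf %s test-data > dd.dump0
    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link of=dd.dump1 iflag=nofollow   # fails with ELOOP: O_NOFOLLOW refuses to follow the symlink
    dd if=dd.dump0.link of=dd.dump1                  # succeeds: without the flag the link is dereferenced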
00:07:08.598 [2024-07-15 08:20:00.543438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:07:08.598 [2024-07-15 08:20:00.684017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.856 [2024-07-15 08:20:00.801483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.856 [2024-07-15 08:20:00.854509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.116  Copying: 512/512 [B] (average 500 kBps) 00:07:09.116 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ tosw5slj730j2tfjbf8d03adxohrufrr8969uxk6fmv2dqj7ke001i5bsuznx423ek5lckdfe4j4xufyxpueq2s10ioe6vjqbalphm9l511nhb53eww7be4eii2g9uozeg49qondmqonvniy3n3zymdo71b7zxv5qe5xqpcw7on3zivzjjzwsr9dgwkipmtcoe4hipnhug17dh6vv13i3vfafucvud11o0pwtq7amnwipx0nvqb2twlhyrb1l2gvjnfi2kk9ef0qtfntouppq6ecvl5473pg0fh3nm7sz0varr9gga6yryxxyw2006cybmk94lxqhyls4et1r9cwlgfbtbzfef9l69jgk09h3a0lsp73dc6tmdey80en4t6avf737d4kw4bsuc58c8wymnlp42xi62bauiwgdjmy1qt8sf4qypv60mllbyr0wnbejh7dgeuebgq83iax94378oh4p5djaqp1deob7fqrrv6ltoq9c3ol7tf9pvjypunl == \t\o\s\w\5\s\l\j\7\3\0\j\2\t\f\j\b\f\8\d\0\3\a\d\x\o\h\r\u\f\r\r\8\9\6\9\u\x\k\6\f\m\v\2\d\q\j\7\k\e\0\0\1\i\5\b\s\u\z\n\x\4\2\3\e\k\5\l\c\k\d\f\e\4\j\4\x\u\f\y\x\p\u\e\q\2\s\1\0\i\o\e\6\v\j\q\b\a\l\p\h\m\9\l\5\1\1\n\h\b\5\3\e\w\w\7\b\e\4\e\i\i\2\g\9\u\o\z\e\g\4\9\q\o\n\d\m\q\o\n\v\n\i\y\3\n\3\z\y\m\d\o\7\1\b\7\z\x\v\5\q\e\5\x\q\p\c\w\7\o\n\3\z\i\v\z\j\j\z\w\s\r\9\d\g\w\k\i\p\m\t\c\o\e\4\h\i\p\n\h\u\g\1\7\d\h\6\v\v\1\3\i\3\v\f\a\f\u\c\v\u\d\1\1\o\0\p\w\t\q\7\a\m\n\w\i\p\x\0\n\v\q\b\2\t\w\l\h\y\r\b\1\l\2\g\v\j\n\f\i\2\k\k\9\e\f\0\q\t\f\n\t\o\u\p\p\q\6\e\c\v\l\5\4\7\3\p\g\0\f\h\3\n\m\7\s\z\0\v\a\r\r\9\g\g\a\6\y\r\y\x\x\y\w\2\0\0\6\c\y\b\m\k\9\4\l\x\q\h\y\l\s\4\e\t\1\r\9\c\w\l\g\f\b\t\b\z\f\e\f\9\l\6\9\j\g\k\0\9\h\3\a\0\l\s\p\7\3\d\c\6\t\m\d\e\y\8\0\e\n\4\t\6\a\v\f\7\3\7\d\4\k\w\4\b\s\u\c\5\8\c\8\w\y\m\n\l\p\4\2\x\i\6\2\b\a\u\i\w\g\d\j\m\y\1\q\t\8\s\f\4\q\y\p\v\6\0\m\l\l\b\y\r\0\w\n\b\e\j\h\7\d\g\e\u\e\b\g\q\8\3\i\a\x\9\4\3\7\8\o\h\4\p\5\d\j\a\q\p\1\d\e\o\b\7\f\q\r\r\v\6\l\t\o\q\9\c\3\o\l\7\t\f\9\p\v\j\y\p\u\n\l ]] 00:07:09.116 00:07:09.116 real 0m1.857s 00:07:09.116 user 0m1.079s 00:07:09.116 sys 0m0.579s 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.116 ************************************ 00:07:09.116 END TEST dd_flag_nofollow 00:07:09.116 ************************************ 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.116 ************************************ 00:07:09.116 START TEST dd_flag_noatime 00:07:09.116 ************************************ 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:09.116 08:20:01 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721031600 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721031601 00:07:09.116 08:20:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:10.056 08:20:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.314 [2024-07-15 08:20:02.238450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:10.314 [2024-07-15 08:20:02.238831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63244 ] 00:07:10.314 [2024-07-15 08:20:02.377330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.572 [2024-07-15 08:20:02.494674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.573 [2024-07-15 08:20:02.547206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.831  Copying: 512/512 [B] (average 500 kBps) 00:07:10.831 00:07:10.831 08:20:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.831 08:20:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721031600 )) 00:07:10.831 08:20:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.831 08:20:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721031601 )) 00:07:10.831 08:20:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.831 [2024-07-15 08:20:02.847804] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
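The noatime test records the access times of both dump files with stat --printf=%X (seconds since the epoch), sleeps one second, copies dump0 with --iflag=noatime, and re-reads the timestamps: with O_NOATIME the recorded values must be unchanged, whereas the plain copy launched just above is expected to move the source atime forward, which the (( atime_if < ... )) check at the end of the test confirms. A condensed sketch of the same probe with coreutils (illustrative; the "before" variable and head -c stand-in for gen_bytes are mine, and it assumes atime updates are not masked by mount options such as noatime, plus O_NOATIME needs ownership of the file or CAP_FOWNER):

    head -c 512 /dev/urandom > dd.dump0
    before=$(stat --printf=%X dd.dump0)                # access time, seconds since the epoch
    sleep 1
    dd if=dd.dump0 of=dd.dump1 iflag=noatime status=none
    (( $(stat --printf=%X dd.dump0) == before )) && echo "atime preserved"
    dd if=dd.dump0 of=dd.dump1 status=none             # ordinary read
    (( $(stat --printf=%X dd.dump0) > before )) && echo "atime advanced"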
00:07:10.831 [2024-07-15 08:20:02.847898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63263 ] 00:07:10.831 [2024-07-15 08:20:02.979646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.090 [2024-07-15 08:20:03.096848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.090 [2024-07-15 08:20:03.149007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.347  Copying: 512/512 [B] (average 500 kBps) 00:07:11.347 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.347 ************************************ 00:07:11.347 END TEST dd_flag_noatime 00:07:11.347 ************************************ 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721031603 )) 00:07:11.347 00:07:11.347 real 0m2.248s 00:07:11.347 user 0m0.733s 00:07:11.347 sys 0m0.542s 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:11.347 ************************************ 00:07:11.347 START TEST dd_flags_misc 00:07:11.347 ************************************ 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.347 08:20:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:11.347 [2024-07-15 08:20:03.511419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
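dd_flags_misc, which starts above, walks a small matrix: each read-side flag in flags_ro=(direct nonblock) is combined with every write-side flag in flags_rw=("${flags_ro[@]}" sync dsync), a fresh 512-byte dump0 is copied to dump1 with that pair, and the [[ ... == ... ]] comparison after each run checks the data survived the round trip byte for byte. A coreutils-flavoured sketch of that loop (simplified; cmp and head -c replace the harness's string comparison and gen_bytes, and the direct cases assume a filesystem that supports O_DIRECT with 512-byte alignment):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    head -c 512 /dev/urandom > dd.dump0
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        dd if=dd.dump0 of=dd.dump1 iflag="$flag_ro" oflag="$flag_rw" status=none
        cmp -s dd.dump0 dd.dump1 && echo "ok: $flag_ro -> $flag_rw"   # contents must match for every combination
      done
    done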
00:07:11.347 [2024-07-15 08:20:03.511513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63286 ] 00:07:11.604 [2024-07-15 08:20:03.642934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.604 [2024-07-15 08:20:03.761412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.863 [2024-07-15 08:20:03.813653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.122  Copying: 512/512 [B] (average 500 kBps) 00:07:12.122 00:07:12.122 08:20:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ akfqh4qftv404q2wgyhsk18zlrduzgfwvrkvi2jf2t2kyk5jzr4iq0clyly28o048mqyqux4pdaw0amotcpzjhpmzn9dtexfdikc3z2ewrh57csao5unsjcbjk7m9o991h05eo7zc3x5quf0uyk7zqtrobf4ss845npxc34e33gvcepigh4ovx3qmpubqmv4en5inzyd038fv263n063cfqdbvy8u00a4y8gxofewc5yyyly85z8xfthq6vkcb762akkl9n56ievxqrlrxaeo05br9nstchmi8n83ltt9ox5bueocg8kxoo6772j0u26dhlyshn0ezt84w0ww528dya539xbyq0v6rukay1oe6vlvpftad8fepzroasqx58win25bqocx41igy9lntd9tovnnt777bcnp2utk261snxd0istu80k463izic1n9q1egjhsl1pipfzy98ri9ntzhbf7yqfucweadycymezopbz12h2g7xba9k4kwe49p8x == \a\k\f\q\h\4\q\f\t\v\4\0\4\q\2\w\g\y\h\s\k\1\8\z\l\r\d\u\z\g\f\w\v\r\k\v\i\2\j\f\2\t\2\k\y\k\5\j\z\r\4\i\q\0\c\l\y\l\y\2\8\o\0\4\8\m\q\y\q\u\x\4\p\d\a\w\0\a\m\o\t\c\p\z\j\h\p\m\z\n\9\d\t\e\x\f\d\i\k\c\3\z\2\e\w\r\h\5\7\c\s\a\o\5\u\n\s\j\c\b\j\k\7\m\9\o\9\9\1\h\0\5\e\o\7\z\c\3\x\5\q\u\f\0\u\y\k\7\z\q\t\r\o\b\f\4\s\s\8\4\5\n\p\x\c\3\4\e\3\3\g\v\c\e\p\i\g\h\4\o\v\x\3\q\m\p\u\b\q\m\v\4\e\n\5\i\n\z\y\d\0\3\8\f\v\2\6\3\n\0\6\3\c\f\q\d\b\v\y\8\u\0\0\a\4\y\8\g\x\o\f\e\w\c\5\y\y\y\l\y\8\5\z\8\x\f\t\h\q\6\v\k\c\b\7\6\2\a\k\k\l\9\n\5\6\i\e\v\x\q\r\l\r\x\a\e\o\0\5\b\r\9\n\s\t\c\h\m\i\8\n\8\3\l\t\t\9\o\x\5\b\u\e\o\c\g\8\k\x\o\o\6\7\7\2\j\0\u\2\6\d\h\l\y\s\h\n\0\e\z\t\8\4\w\0\w\w\5\2\8\d\y\a\5\3\9\x\b\y\q\0\v\6\r\u\k\a\y\1\o\e\6\v\l\v\p\f\t\a\d\8\f\e\p\z\r\o\a\s\q\x\5\8\w\i\n\2\5\b\q\o\c\x\4\1\i\g\y\9\l\n\t\d\9\t\o\v\n\n\t\7\7\7\b\c\n\p\2\u\t\k\2\6\1\s\n\x\d\0\i\s\t\u\8\0\k\4\6\3\i\z\i\c\1\n\9\q\1\e\g\j\h\s\l\1\p\i\p\f\z\y\9\8\r\i\9\n\t\z\h\b\f\7\y\q\f\u\c\w\e\a\d\y\c\y\m\e\z\o\p\b\z\1\2\h\2\g\7\x\b\a\9\k\4\k\w\e\4\9\p\8\x ]] 00:07:12.122 08:20:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.122 08:20:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:12.122 [2024-07-15 08:20:04.114767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:12.122 [2024-07-15 08:20:04.114879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:07:12.122 [2024-07-15 08:20:04.252658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.380 [2024-07-15 08:20:04.369411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.380 [2024-07-15 08:20:04.421134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.638  Copying: 512/512 [B] (average 500 kBps) 00:07:12.638 00:07:12.638 08:20:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ akfqh4qftv404q2wgyhsk18zlrduzgfwvrkvi2jf2t2kyk5jzr4iq0clyly28o048mqyqux4pdaw0amotcpzjhpmzn9dtexfdikc3z2ewrh57csao5unsjcbjk7m9o991h05eo7zc3x5quf0uyk7zqtrobf4ss845npxc34e33gvcepigh4ovx3qmpubqmv4en5inzyd038fv263n063cfqdbvy8u00a4y8gxofewc5yyyly85z8xfthq6vkcb762akkl9n56ievxqrlrxaeo05br9nstchmi8n83ltt9ox5bueocg8kxoo6772j0u26dhlyshn0ezt84w0ww528dya539xbyq0v6rukay1oe6vlvpftad8fepzroasqx58win25bqocx41igy9lntd9tovnnt777bcnp2utk261snxd0istu80k463izic1n9q1egjhsl1pipfzy98ri9ntzhbf7yqfucweadycymezopbz12h2g7xba9k4kwe49p8x == \a\k\f\q\h\4\q\f\t\v\4\0\4\q\2\w\g\y\h\s\k\1\8\z\l\r\d\u\z\g\f\w\v\r\k\v\i\2\j\f\2\t\2\k\y\k\5\j\z\r\4\i\q\0\c\l\y\l\y\2\8\o\0\4\8\m\q\y\q\u\x\4\p\d\a\w\0\a\m\o\t\c\p\z\j\h\p\m\z\n\9\d\t\e\x\f\d\i\k\c\3\z\2\e\w\r\h\5\7\c\s\a\o\5\u\n\s\j\c\b\j\k\7\m\9\o\9\9\1\h\0\5\e\o\7\z\c\3\x\5\q\u\f\0\u\y\k\7\z\q\t\r\o\b\f\4\s\s\8\4\5\n\p\x\c\3\4\e\3\3\g\v\c\e\p\i\g\h\4\o\v\x\3\q\m\p\u\b\q\m\v\4\e\n\5\i\n\z\y\d\0\3\8\f\v\2\6\3\n\0\6\3\c\f\q\d\b\v\y\8\u\0\0\a\4\y\8\g\x\o\f\e\w\c\5\y\y\y\l\y\8\5\z\8\x\f\t\h\q\6\v\k\c\b\7\6\2\a\k\k\l\9\n\5\6\i\e\v\x\q\r\l\r\x\a\e\o\0\5\b\r\9\n\s\t\c\h\m\i\8\n\8\3\l\t\t\9\o\x\5\b\u\e\o\c\g\8\k\x\o\o\6\7\7\2\j\0\u\2\6\d\h\l\y\s\h\n\0\e\z\t\8\4\w\0\w\w\5\2\8\d\y\a\5\3\9\x\b\y\q\0\v\6\r\u\k\a\y\1\o\e\6\v\l\v\p\f\t\a\d\8\f\e\p\z\r\o\a\s\q\x\5\8\w\i\n\2\5\b\q\o\c\x\4\1\i\g\y\9\l\n\t\d\9\t\o\v\n\n\t\7\7\7\b\c\n\p\2\u\t\k\2\6\1\s\n\x\d\0\i\s\t\u\8\0\k\4\6\3\i\z\i\c\1\n\9\q\1\e\g\j\h\s\l\1\p\i\p\f\z\y\9\8\r\i\9\n\t\z\h\b\f\7\y\q\f\u\c\w\e\a\d\y\c\y\m\e\z\o\p\b\z\1\2\h\2\g\7\x\b\a\9\k\4\k\w\e\4\9\p\8\x ]] 00:07:12.638 08:20:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.638 08:20:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:12.638 [2024-07-15 08:20:04.726417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
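The run just launched uses --oflag=sync and the next one in the matrix switches to --oflag=dsync. The difference is only in write durability: O_SYNC makes each write wait for data and all metadata to reach stable storage, while O_DSYNC waits only for the data and the metadata needed to read it back, so the content check stays identical for both. The same contrast with coreutils (sketch):

    dd if=dd.dump0 of=dd.dump1 oflag=sync status=none    # O_SYNC writes: data + metadata flushed per write
    dd if=dd.dump0 of=dd.dump1 oflag=dsync status=none   # O_DSYNC writes: data (and minimal metadata) flushed, typically cheaper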
00:07:12.638 [2024-07-15 08:20:04.726528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63305 ] 00:07:12.895 [2024-07-15 08:20:04.860372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.895 [2024-07-15 08:20:04.976948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.895 [2024-07-15 08:20:05.028969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.153  Copying: 512/512 [B] (average 166 kBps) 00:07:13.153 00:07:13.154 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ akfqh4qftv404q2wgyhsk18zlrduzgfwvrkvi2jf2t2kyk5jzr4iq0clyly28o048mqyqux4pdaw0amotcpzjhpmzn9dtexfdikc3z2ewrh57csao5unsjcbjk7m9o991h05eo7zc3x5quf0uyk7zqtrobf4ss845npxc34e33gvcepigh4ovx3qmpubqmv4en5inzyd038fv263n063cfqdbvy8u00a4y8gxofewc5yyyly85z8xfthq6vkcb762akkl9n56ievxqrlrxaeo05br9nstchmi8n83ltt9ox5bueocg8kxoo6772j0u26dhlyshn0ezt84w0ww528dya539xbyq0v6rukay1oe6vlvpftad8fepzroasqx58win25bqocx41igy9lntd9tovnnt777bcnp2utk261snxd0istu80k463izic1n9q1egjhsl1pipfzy98ri9ntzhbf7yqfucweadycymezopbz12h2g7xba9k4kwe49p8x == \a\k\f\q\h\4\q\f\t\v\4\0\4\q\2\w\g\y\h\s\k\1\8\z\l\r\d\u\z\g\f\w\v\r\k\v\i\2\j\f\2\t\2\k\y\k\5\j\z\r\4\i\q\0\c\l\y\l\y\2\8\o\0\4\8\m\q\y\q\u\x\4\p\d\a\w\0\a\m\o\t\c\p\z\j\h\p\m\z\n\9\d\t\e\x\f\d\i\k\c\3\z\2\e\w\r\h\5\7\c\s\a\o\5\u\n\s\j\c\b\j\k\7\m\9\o\9\9\1\h\0\5\e\o\7\z\c\3\x\5\q\u\f\0\u\y\k\7\z\q\t\r\o\b\f\4\s\s\8\4\5\n\p\x\c\3\4\e\3\3\g\v\c\e\p\i\g\h\4\o\v\x\3\q\m\p\u\b\q\m\v\4\e\n\5\i\n\z\y\d\0\3\8\f\v\2\6\3\n\0\6\3\c\f\q\d\b\v\y\8\u\0\0\a\4\y\8\g\x\o\f\e\w\c\5\y\y\y\l\y\8\5\z\8\x\f\t\h\q\6\v\k\c\b\7\6\2\a\k\k\l\9\n\5\6\i\e\v\x\q\r\l\r\x\a\e\o\0\5\b\r\9\n\s\t\c\h\m\i\8\n\8\3\l\t\t\9\o\x\5\b\u\e\o\c\g\8\k\x\o\o\6\7\7\2\j\0\u\2\6\d\h\l\y\s\h\n\0\e\z\t\8\4\w\0\w\w\5\2\8\d\y\a\5\3\9\x\b\y\q\0\v\6\r\u\k\a\y\1\o\e\6\v\l\v\p\f\t\a\d\8\f\e\p\z\r\o\a\s\q\x\5\8\w\i\n\2\5\b\q\o\c\x\4\1\i\g\y\9\l\n\t\d\9\t\o\v\n\n\t\7\7\7\b\c\n\p\2\u\t\k\2\6\1\s\n\x\d\0\i\s\t\u\8\0\k\4\6\3\i\z\i\c\1\n\9\q\1\e\g\j\h\s\l\1\p\i\p\f\z\y\9\8\r\i\9\n\t\z\h\b\f\7\y\q\f\u\c\w\e\a\d\y\c\y\m\e\z\o\p\b\z\1\2\h\2\g\7\x\b\a\9\k\4\k\w\e\4\9\p\8\x ]] 00:07:13.154 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.154 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:13.411 [2024-07-15 08:20:05.331815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:13.411 [2024-07-15 08:20:05.331918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63320 ] 00:07:13.411 [2024-07-15 08:20:05.474862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.669 [2024-07-15 08:20:05.605406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.669 [2024-07-15 08:20:05.660652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.928  Copying: 512/512 [B] (average 500 kBps) 00:07:13.928 00:07:13.928 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ akfqh4qftv404q2wgyhsk18zlrduzgfwvrkvi2jf2t2kyk5jzr4iq0clyly28o048mqyqux4pdaw0amotcpzjhpmzn9dtexfdikc3z2ewrh57csao5unsjcbjk7m9o991h05eo7zc3x5quf0uyk7zqtrobf4ss845npxc34e33gvcepigh4ovx3qmpubqmv4en5inzyd038fv263n063cfqdbvy8u00a4y8gxofewc5yyyly85z8xfthq6vkcb762akkl9n56ievxqrlrxaeo05br9nstchmi8n83ltt9ox5bueocg8kxoo6772j0u26dhlyshn0ezt84w0ww528dya539xbyq0v6rukay1oe6vlvpftad8fepzroasqx58win25bqocx41igy9lntd9tovnnt777bcnp2utk261snxd0istu80k463izic1n9q1egjhsl1pipfzy98ri9ntzhbf7yqfucweadycymezopbz12h2g7xba9k4kwe49p8x == \a\k\f\q\h\4\q\f\t\v\4\0\4\q\2\w\g\y\h\s\k\1\8\z\l\r\d\u\z\g\f\w\v\r\k\v\i\2\j\f\2\t\2\k\y\k\5\j\z\r\4\i\q\0\c\l\y\l\y\2\8\o\0\4\8\m\q\y\q\u\x\4\p\d\a\w\0\a\m\o\t\c\p\z\j\h\p\m\z\n\9\d\t\e\x\f\d\i\k\c\3\z\2\e\w\r\h\5\7\c\s\a\o\5\u\n\s\j\c\b\j\k\7\m\9\o\9\9\1\h\0\5\e\o\7\z\c\3\x\5\q\u\f\0\u\y\k\7\z\q\t\r\o\b\f\4\s\s\8\4\5\n\p\x\c\3\4\e\3\3\g\v\c\e\p\i\g\h\4\o\v\x\3\q\m\p\u\b\q\m\v\4\e\n\5\i\n\z\y\d\0\3\8\f\v\2\6\3\n\0\6\3\c\f\q\d\b\v\y\8\u\0\0\a\4\y\8\g\x\o\f\e\w\c\5\y\y\y\l\y\8\5\z\8\x\f\t\h\q\6\v\k\c\b\7\6\2\a\k\k\l\9\n\5\6\i\e\v\x\q\r\l\r\x\a\e\o\0\5\b\r\9\n\s\t\c\h\m\i\8\n\8\3\l\t\t\9\o\x\5\b\u\e\o\c\g\8\k\x\o\o\6\7\7\2\j\0\u\2\6\d\h\l\y\s\h\n\0\e\z\t\8\4\w\0\w\w\5\2\8\d\y\a\5\3\9\x\b\y\q\0\v\6\r\u\k\a\y\1\o\e\6\v\l\v\p\f\t\a\d\8\f\e\p\z\r\o\a\s\q\x\5\8\w\i\n\2\5\b\q\o\c\x\4\1\i\g\y\9\l\n\t\d\9\t\o\v\n\n\t\7\7\7\b\c\n\p\2\u\t\k\2\6\1\s\n\x\d\0\i\s\t\u\8\0\k\4\6\3\i\z\i\c\1\n\9\q\1\e\g\j\h\s\l\1\p\i\p\f\z\y\9\8\r\i\9\n\t\z\h\b\f\7\y\q\f\u\c\w\e\a\d\y\c\y\m\e\z\o\p\b\z\1\2\h\2\g\7\x\b\a\9\k\4\k\w\e\4\9\p\8\x ]] 00:07:13.928 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:13.928 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:13.928 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:13.928 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:13.929 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.929 08:20:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:13.929 [2024-07-15 08:20:05.980377] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
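From this point the read-side flag switches to nonblock. O_NONBLOCK has no practical effect on regular files, since reads and writes to them never block in that sense, so these combinations mainly confirm that the flag is accepted and the copy still round-trips intact. Coreutils sketch of one such pairing:

    dd if=dd.dump0 of=dd.dump1 iflag=nonblock oflag=direct status=none   # nonblock is accepted but is effectively a no-op for regular files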
00:07:13.929 [2024-07-15 08:20:05.980801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63335 ] 00:07:14.187 [2024-07-15 08:20:06.116496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.187 [2024-07-15 08:20:06.231143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.187 [2024-07-15 08:20:06.283519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.464  Copying: 512/512 [B] (average 500 kBps) 00:07:14.464 00:07:14.464 08:20:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p9s3zdgv507zoqmk183erd1ktyaek4mvwriwcs2jks40ifirjlc1dm99ra549ss6t4mkxlbs6f7p81elat96hrjohp4fpctuskheu9g1jqdbi1k1ncwqomgk5gr3ngxrew0vtbc3xfzoyakf90mn030gmy7q0kntev87fsungfj3myx1hu30k2gje5gaood07rxrfmrlab9ho79rlkt6qsy20pmuz693pyfowf9rxkw3jvywgt39p7jmlqele6wnnf1srutpzozuetfz5f6xsfwh0mfrcipmtqafb4b2eg030xzkl9sii0eaxpj1sqj6g3ec6834lxkzrmlgi77u7yc74aetjwt91yjlangwg0u6vhhdn6a8xhkwoaqsgyg0fkst6sx2e9rqrveg2eavcqobfs50j1tn0t2zlc524j4wms68kurpmkylcr7jgqcuro1zcsivysji0eou5ep0mz8nn5tm9qj9dmmhy3ngr7uhy6gezucxpjuuod1f1rz2 == \p\9\s\3\z\d\g\v\5\0\7\z\o\q\m\k\1\8\3\e\r\d\1\k\t\y\a\e\k\4\m\v\w\r\i\w\c\s\2\j\k\s\4\0\i\f\i\r\j\l\c\1\d\m\9\9\r\a\5\4\9\s\s\6\t\4\m\k\x\l\b\s\6\f\7\p\8\1\e\l\a\t\9\6\h\r\j\o\h\p\4\f\p\c\t\u\s\k\h\e\u\9\g\1\j\q\d\b\i\1\k\1\n\c\w\q\o\m\g\k\5\g\r\3\n\g\x\r\e\w\0\v\t\b\c\3\x\f\z\o\y\a\k\f\9\0\m\n\0\3\0\g\m\y\7\q\0\k\n\t\e\v\8\7\f\s\u\n\g\f\j\3\m\y\x\1\h\u\3\0\k\2\g\j\e\5\g\a\o\o\d\0\7\r\x\r\f\m\r\l\a\b\9\h\o\7\9\r\l\k\t\6\q\s\y\2\0\p\m\u\z\6\9\3\p\y\f\o\w\f\9\r\x\k\w\3\j\v\y\w\g\t\3\9\p\7\j\m\l\q\e\l\e\6\w\n\n\f\1\s\r\u\t\p\z\o\z\u\e\t\f\z\5\f\6\x\s\f\w\h\0\m\f\r\c\i\p\m\t\q\a\f\b\4\b\2\e\g\0\3\0\x\z\k\l\9\s\i\i\0\e\a\x\p\j\1\s\q\j\6\g\3\e\c\6\8\3\4\l\x\k\z\r\m\l\g\i\7\7\u\7\y\c\7\4\a\e\t\j\w\t\9\1\y\j\l\a\n\g\w\g\0\u\6\v\h\h\d\n\6\a\8\x\h\k\w\o\a\q\s\g\y\g\0\f\k\s\t\6\s\x\2\e\9\r\q\r\v\e\g\2\e\a\v\c\q\o\b\f\s\5\0\j\1\t\n\0\t\2\z\l\c\5\2\4\j\4\w\m\s\6\8\k\u\r\p\m\k\y\l\c\r\7\j\g\q\c\u\r\o\1\z\c\s\i\v\y\s\j\i\0\e\o\u\5\e\p\0\m\z\8\n\n\5\t\m\9\q\j\9\d\m\m\h\y\3\n\g\r\7\u\h\y\6\g\e\z\u\c\x\p\j\u\u\o\d\1\f\1\r\z\2 ]] 00:07:14.464 08:20:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.464 08:20:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:14.464 [2024-07-15 08:20:06.584061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:14.464 [2024-07-15 08:20:06.584168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63339 ] 00:07:14.722 [2024-07-15 08:20:06.719466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.722 [2024-07-15 08:20:06.836589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.722 [2024-07-15 08:20:06.890329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.980  Copying: 512/512 [B] (average 500 kBps) 00:07:14.980 00:07:14.980 08:20:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p9s3zdgv507zoqmk183erd1ktyaek4mvwriwcs2jks40ifirjlc1dm99ra549ss6t4mkxlbs6f7p81elat96hrjohp4fpctuskheu9g1jqdbi1k1ncwqomgk5gr3ngxrew0vtbc3xfzoyakf90mn030gmy7q0kntev87fsungfj3myx1hu30k2gje5gaood07rxrfmrlab9ho79rlkt6qsy20pmuz693pyfowf9rxkw3jvywgt39p7jmlqele6wnnf1srutpzozuetfz5f6xsfwh0mfrcipmtqafb4b2eg030xzkl9sii0eaxpj1sqj6g3ec6834lxkzrmlgi77u7yc74aetjwt91yjlangwg0u6vhhdn6a8xhkwoaqsgyg0fkst6sx2e9rqrveg2eavcqobfs50j1tn0t2zlc524j4wms68kurpmkylcr7jgqcuro1zcsivysji0eou5ep0mz8nn5tm9qj9dmmhy3ngr7uhy6gezucxpjuuod1f1rz2 == \p\9\s\3\z\d\g\v\5\0\7\z\o\q\m\k\1\8\3\e\r\d\1\k\t\y\a\e\k\4\m\v\w\r\i\w\c\s\2\j\k\s\4\0\i\f\i\r\j\l\c\1\d\m\9\9\r\a\5\4\9\s\s\6\t\4\m\k\x\l\b\s\6\f\7\p\8\1\e\l\a\t\9\6\h\r\j\o\h\p\4\f\p\c\t\u\s\k\h\e\u\9\g\1\j\q\d\b\i\1\k\1\n\c\w\q\o\m\g\k\5\g\r\3\n\g\x\r\e\w\0\v\t\b\c\3\x\f\z\o\y\a\k\f\9\0\m\n\0\3\0\g\m\y\7\q\0\k\n\t\e\v\8\7\f\s\u\n\g\f\j\3\m\y\x\1\h\u\3\0\k\2\g\j\e\5\g\a\o\o\d\0\7\r\x\r\f\m\r\l\a\b\9\h\o\7\9\r\l\k\t\6\q\s\y\2\0\p\m\u\z\6\9\3\p\y\f\o\w\f\9\r\x\k\w\3\j\v\y\w\g\t\3\9\p\7\j\m\l\q\e\l\e\6\w\n\n\f\1\s\r\u\t\p\z\o\z\u\e\t\f\z\5\f\6\x\s\f\w\h\0\m\f\r\c\i\p\m\t\q\a\f\b\4\b\2\e\g\0\3\0\x\z\k\l\9\s\i\i\0\e\a\x\p\j\1\s\q\j\6\g\3\e\c\6\8\3\4\l\x\k\z\r\m\l\g\i\7\7\u\7\y\c\7\4\a\e\t\j\w\t\9\1\y\j\l\a\n\g\w\g\0\u\6\v\h\h\d\n\6\a\8\x\h\k\w\o\a\q\s\g\y\g\0\f\k\s\t\6\s\x\2\e\9\r\q\r\v\e\g\2\e\a\v\c\q\o\b\f\s\5\0\j\1\t\n\0\t\2\z\l\c\5\2\4\j\4\w\m\s\6\8\k\u\r\p\m\k\y\l\c\r\7\j\g\q\c\u\r\o\1\z\c\s\i\v\y\s\j\i\0\e\o\u\5\e\p\0\m\z\8\n\n\5\t\m\9\q\j\9\d\m\m\h\y\3\n\g\r\7\u\h\y\6\g\e\z\u\c\x\p\j\u\u\o\d\1\f\1\r\z\2 ]] 00:07:14.980 08:20:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.980 08:20:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:15.239 [2024-07-15 08:20:07.188981] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:15.239 [2024-07-15 08:20:07.189093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63354 ] 00:07:15.239 [2024-07-15 08:20:07.328614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.497 [2024-07-15 08:20:07.447306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.497 [2024-07-15 08:20:07.499430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.756  Copying: 512/512 [B] (average 166 kBps) 00:07:15.756 00:07:15.756 08:20:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p9s3zdgv507zoqmk183erd1ktyaek4mvwriwcs2jks40ifirjlc1dm99ra549ss6t4mkxlbs6f7p81elat96hrjohp4fpctuskheu9g1jqdbi1k1ncwqomgk5gr3ngxrew0vtbc3xfzoyakf90mn030gmy7q0kntev87fsungfj3myx1hu30k2gje5gaood07rxrfmrlab9ho79rlkt6qsy20pmuz693pyfowf9rxkw3jvywgt39p7jmlqele6wnnf1srutpzozuetfz5f6xsfwh0mfrcipmtqafb4b2eg030xzkl9sii0eaxpj1sqj6g3ec6834lxkzrmlgi77u7yc74aetjwt91yjlangwg0u6vhhdn6a8xhkwoaqsgyg0fkst6sx2e9rqrveg2eavcqobfs50j1tn0t2zlc524j4wms68kurpmkylcr7jgqcuro1zcsivysji0eou5ep0mz8nn5tm9qj9dmmhy3ngr7uhy6gezucxpjuuod1f1rz2 == \p\9\s\3\z\d\g\v\5\0\7\z\o\q\m\k\1\8\3\e\r\d\1\k\t\y\a\e\k\4\m\v\w\r\i\w\c\s\2\j\k\s\4\0\i\f\i\r\j\l\c\1\d\m\9\9\r\a\5\4\9\s\s\6\t\4\m\k\x\l\b\s\6\f\7\p\8\1\e\l\a\t\9\6\h\r\j\o\h\p\4\f\p\c\t\u\s\k\h\e\u\9\g\1\j\q\d\b\i\1\k\1\n\c\w\q\o\m\g\k\5\g\r\3\n\g\x\r\e\w\0\v\t\b\c\3\x\f\z\o\y\a\k\f\9\0\m\n\0\3\0\g\m\y\7\q\0\k\n\t\e\v\8\7\f\s\u\n\g\f\j\3\m\y\x\1\h\u\3\0\k\2\g\j\e\5\g\a\o\o\d\0\7\r\x\r\f\m\r\l\a\b\9\h\o\7\9\r\l\k\t\6\q\s\y\2\0\p\m\u\z\6\9\3\p\y\f\o\w\f\9\r\x\k\w\3\j\v\y\w\g\t\3\9\p\7\j\m\l\q\e\l\e\6\w\n\n\f\1\s\r\u\t\p\z\o\z\u\e\t\f\z\5\f\6\x\s\f\w\h\0\m\f\r\c\i\p\m\t\q\a\f\b\4\b\2\e\g\0\3\0\x\z\k\l\9\s\i\i\0\e\a\x\p\j\1\s\q\j\6\g\3\e\c\6\8\3\4\l\x\k\z\r\m\l\g\i\7\7\u\7\y\c\7\4\a\e\t\j\w\t\9\1\y\j\l\a\n\g\w\g\0\u\6\v\h\h\d\n\6\a\8\x\h\k\w\o\a\q\s\g\y\g\0\f\k\s\t\6\s\x\2\e\9\r\q\r\v\e\g\2\e\a\v\c\q\o\b\f\s\5\0\j\1\t\n\0\t\2\z\l\c\5\2\4\j\4\w\m\s\6\8\k\u\r\p\m\k\y\l\c\r\7\j\g\q\c\u\r\o\1\z\c\s\i\v\y\s\j\i\0\e\o\u\5\e\p\0\m\z\8\n\n\5\t\m\9\q\j\9\d\m\m\h\y\3\n\g\r\7\u\h\y\6\g\e\z\u\c\x\p\j\u\u\o\d\1\f\1\r\z\2 ]] 00:07:15.756 08:20:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.756 08:20:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:15.756 [2024-07-15 08:20:07.805773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:15.756 [2024-07-15 08:20:07.805879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63358 ] 00:07:16.014 [2024-07-15 08:20:07.946054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.014 [2024-07-15 08:20:08.064254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.014 [2024-07-15 08:20:08.116294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.272  Copying: 512/512 [B] (average 250 kBps) 00:07:16.272 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ p9s3zdgv507zoqmk183erd1ktyaek4mvwriwcs2jks40ifirjlc1dm99ra549ss6t4mkxlbs6f7p81elat96hrjohp4fpctuskheu9g1jqdbi1k1ncwqomgk5gr3ngxrew0vtbc3xfzoyakf90mn030gmy7q0kntev87fsungfj3myx1hu30k2gje5gaood07rxrfmrlab9ho79rlkt6qsy20pmuz693pyfowf9rxkw3jvywgt39p7jmlqele6wnnf1srutpzozuetfz5f6xsfwh0mfrcipmtqafb4b2eg030xzkl9sii0eaxpj1sqj6g3ec6834lxkzrmlgi77u7yc74aetjwt91yjlangwg0u6vhhdn6a8xhkwoaqsgyg0fkst6sx2e9rqrveg2eavcqobfs50j1tn0t2zlc524j4wms68kurpmkylcr7jgqcuro1zcsivysji0eou5ep0mz8nn5tm9qj9dmmhy3ngr7uhy6gezucxpjuuod1f1rz2 == \p\9\s\3\z\d\g\v\5\0\7\z\o\q\m\k\1\8\3\e\r\d\1\k\t\y\a\e\k\4\m\v\w\r\i\w\c\s\2\j\k\s\4\0\i\f\i\r\j\l\c\1\d\m\9\9\r\a\5\4\9\s\s\6\t\4\m\k\x\l\b\s\6\f\7\p\8\1\e\l\a\t\9\6\h\r\j\o\h\p\4\f\p\c\t\u\s\k\h\e\u\9\g\1\j\q\d\b\i\1\k\1\n\c\w\q\o\m\g\k\5\g\r\3\n\g\x\r\e\w\0\v\t\b\c\3\x\f\z\o\y\a\k\f\9\0\m\n\0\3\0\g\m\y\7\q\0\k\n\t\e\v\8\7\f\s\u\n\g\f\j\3\m\y\x\1\h\u\3\0\k\2\g\j\e\5\g\a\o\o\d\0\7\r\x\r\f\m\r\l\a\b\9\h\o\7\9\r\l\k\t\6\q\s\y\2\0\p\m\u\z\6\9\3\p\y\f\o\w\f\9\r\x\k\w\3\j\v\y\w\g\t\3\9\p\7\j\m\l\q\e\l\e\6\w\n\n\f\1\s\r\u\t\p\z\o\z\u\e\t\f\z\5\f\6\x\s\f\w\h\0\m\f\r\c\i\p\m\t\q\a\f\b\4\b\2\e\g\0\3\0\x\z\k\l\9\s\i\i\0\e\a\x\p\j\1\s\q\j\6\g\3\e\c\6\8\3\4\l\x\k\z\r\m\l\g\i\7\7\u\7\y\c\7\4\a\e\t\j\w\t\9\1\y\j\l\a\n\g\w\g\0\u\6\v\h\h\d\n\6\a\8\x\h\k\w\o\a\q\s\g\y\g\0\f\k\s\t\6\s\x\2\e\9\r\q\r\v\e\g\2\e\a\v\c\q\o\b\f\s\5\0\j\1\t\n\0\t\2\z\l\c\5\2\4\j\4\w\m\s\6\8\k\u\r\p\m\k\y\l\c\r\7\j\g\q\c\u\r\o\1\z\c\s\i\v\y\s\j\i\0\e\o\u\5\e\p\0\m\z\8\n\n\5\t\m\9\q\j\9\d\m\m\h\y\3\n\g\r\7\u\h\y\6\g\e\z\u\c\x\p\j\u\u\o\d\1\f\1\r\z\2 ]] 00:07:16.272 00:07:16.272 real 0m4.911s 00:07:16.272 user 0m2.902s 00:07:16.272 sys 0m2.146s 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:16.272 ************************************ 00:07:16.272 END TEST dd_flags_misc 00:07:16.272 ************************************ 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:16.272 * Second test run, disabling liburing, forcing AIO 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:16.272 08:20:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:16.273 ************************************ 00:07:16.273 START TEST dd_flag_append_forced_aio 00:07:16.273 ************************************ 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=7xn6w1n9tbj4is7wbdcs481lchoxjpzp 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=5rjgppkfa95v9cbpbt7btbx2igrv4rh0 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 7xn6w1n9tbj4is7wbdcs481lchoxjpzp 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 5rjgppkfa95v9cbpbt7btbx2igrv4rh0 00:07:16.273 08:20:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:16.530 [2024-07-15 08:20:08.490124] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
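The "Second test run" pass repeats the suite with DD_APP extended by --aio, which, per the banner above, disables liburing and forces the AIO backend for spdk_dd; the "Default socket implementaion override: uring" lines come from the sock.c subsystem init and refer to the socket layer, not the dd engine. The append check itself is unchanged, only the launcher gains the extra flag (the invocation below simply restates the command shown in the log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --oflag=append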
00:07:16.530 [2024-07-15 08:20:08.490234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63392 ] 00:07:16.530 [2024-07-15 08:20:08.629385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.788 [2024-07-15 08:20:08.745163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.788 [2024-07-15 08:20:08.797204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.045  Copying: 32/32 [B] (average 31 kBps) 00:07:17.045 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 5rjgppkfa95v9cbpbt7btbx2igrv4rh07xn6w1n9tbj4is7wbdcs481lchoxjpzp == \5\r\j\g\p\p\k\f\a\9\5\v\9\c\b\p\b\t\7\b\t\b\x\2\i\g\r\v\4\r\h\0\7\x\n\6\w\1\n\9\t\b\j\4\i\s\7\w\b\d\c\s\4\8\1\l\c\h\o\x\j\p\z\p ]] 00:07:17.045 00:07:17.045 real 0m0.657s 00:07:17.045 user 0m0.389s 00:07:17.045 sys 0m0.146s 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.045 ************************************ 00:07:17.045 END TEST dd_flag_append_forced_aio 00:07:17.045 ************************************ 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.045 ************************************ 00:07:17.045 START TEST dd_flag_directory_forced_aio 00:07:17.045 ************************************ 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.045 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.045 [2024-07-15 08:20:09.193564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:17.045 [2024-07-15 08:20:09.193657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:07:17.302 [2024-07-15 08:20:09.330879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.302 [2024-07-15 08:20:09.461265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.560 [2024-07-15 08:20:09.518216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.560 [2024-07-15 08:20:09.555244] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.560 [2024-07-15 08:20:09.555310] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:17.560 [2024-07-15 08:20:09.555328] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.560 [2024-07-15 08:20:09.671606] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.817 08:20:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:17.817 [2024-07-15 08:20:09.832080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:17.817 [2024-07-15 08:20:09.832188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63428 ] 00:07:17.817 [2024-07-15 08:20:09.970328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.074 [2024-07-15 08:20:10.090262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.074 [2024-07-15 08:20:10.145206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.074 [2024-07-15 08:20:10.180899] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.074 [2024-07-15 08:20:10.180958] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.074 [2024-07-15 08:20:10.180975] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.331 [2024-07-15 08:20:10.297648] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.331 ************************************ 00:07:18.331 END TEST dd_flag_directory_forced_aio 00:07:18.331 ************************************ 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 
-- # case "$es" in 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.331 00:07:18.331 real 0m1.261s 00:07:18.331 user 0m0.745s 00:07:18.331 sys 0m0.302s 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:18.331 ************************************ 00:07:18.331 START TEST dd_flag_nofollow_forced_aio 00:07:18.331 ************************************ 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.331 08:20:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.589 [2024-07-15 08:20:10.514317] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:18.589 [2024-07-15 08:20:10.514410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63462 ] 00:07:18.589 [2024-07-15 08:20:10.651878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.847 [2024-07-15 08:20:10.781277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.847 [2024-07-15 08:20:10.838031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.847 [2024-07-15 08:20:10.874927] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:18.847 [2024-07-15 08:20:10.874995] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:18.847 [2024-07-15 08:20:10.875017] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.847 [2024-07-15 08:20:10.991073] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.104 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:19.104 [2024-07-15 08:20:11.156928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.105 [2024-07-15 08:20:11.157029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63472 ] 00:07:19.362 [2024-07-15 08:20:11.297332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.362 [2024-07-15 08:20:11.417670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.362 [2024-07-15 08:20:11.471644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.362 [2024-07-15 08:20:11.506743] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:19.362 [2024-07-15 08:20:11.506799] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:19.362 [2024-07-15 08:20:11.506816] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.619 [2024-07-15 08:20:11.620120] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:19.619 08:20:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.619 [2024-07-15 08:20:11.780938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.619 [2024-07-15 08:20:11.781039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63479 ] 00:07:19.877 [2024-07-15 08:20:11.915196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.877 [2024-07-15 08:20:12.033377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.134 [2024-07-15 08:20:12.086269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.393  Copying: 512/512 [B] (average 500 kBps) 00:07:20.393 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ m6z83b7c2mhoiybdrufu964945tvex7twbuhrgb3oq3lt4oniya5a8hja389vvunoavcuyov3ezqi0p1cey8yyd6f23skf7a92i3vk39yemhwgpmotymcg787eot3a34nfnsleghw6hb5ya0jtjlzdm9bj6b8ycm0ce0aje6u7bit13ypj1zbgsh9z3x5o3xirlexmfi3u922ycc070qfwu5ywcarou0e71gm6p1rn42qd4buhohnu26qk4atqwebi2q6i2gg8gzsf7xlp5a4if3cxit3lnxvr7hkinknn8a3n3pzvd235capw3dyhagj8sqftatckmuvjtousop34f8t2s972ulspplxnrgzyulnhh45df13nshudt2nxrjdjofcc5m5zuurubq1wztvlkw7vfl0mt2o1otsmk7ykpk2k1kiyrae20nshe7vnkqodblx0pdph4zy1gd6z01rogoo7bg0zj1hkog1aw7iq3aag1luyw5x3urw6bbx3d2 == \m\6\z\8\3\b\7\c\2\m\h\o\i\y\b\d\r\u\f\u\9\6\4\9\4\5\t\v\e\x\7\t\w\b\u\h\r\g\b\3\o\q\3\l\t\4\o\n\i\y\a\5\a\8\h\j\a\3\8\9\v\v\u\n\o\a\v\c\u\y\o\v\3\e\z\q\i\0\p\1\c\e\y\8\y\y\d\6\f\2\3\s\k\f\7\a\9\2\i\3\v\k\3\9\y\e\m\h\w\g\p\m\o\t\y\m\c\g\7\8\7\e\o\t\3\a\3\4\n\f\n\s\l\e\g\h\w\6\h\b\5\y\a\0\j\t\j\l\z\d\m\9\b\j\6\b\8\y\c\m\0\c\e\0\a\j\e\6\u\7\b\i\t\1\3\y\p\j\1\z\b\g\s\h\9\z\3\x\5\o\3\x\i\r\l\e\x\m\f\i\3\u\9\2\2\y\c\c\0\7\0\q\f\w\u\5\y\w\c\a\r\o\u\0\e\7\1\g\m\6\p\1\r\n\4\2\q\d\4\b\u\h\o\h\n\u\2\6\q\k\4\a\t\q\w\e\b\i\2\q\6\i\2\g\g\8\g\z\s\f\7\x\l\p\5\a\4\i\f\3\c\x\i\t\3\l\n\x\v\r\7\h\k\i\n\k\n\n\8\a\3\n\3\p\z\v\d\2\3\5\c\a\p\w\3\d\y\h\a\g\j\8\s\q\f\t\a\t\c\k\m\u\v\j\t\o\u\s\o\p\3\4\f\8\t\2\s\9\7\2\u\l\s\p\p\l\x\n\r\g\z\y\u\l\n\h\h\4\5\d\f\1\3\n\s\h\u\d\t\2\n\x\r\j\d\j\o\f\c\c\5\m\5\z\u\u\r\u\b\q\1\w\z\t\v\l\k\w\7\v\f\l\0\m\t\2\o\1\o\t\s\m\k\7\y\k\p\k\2\k\1\k\i\y\r\a\e\2\0\n\s\h\e\7\v\n\k\q\o\d\b\l\x\0\p\d\p\h\4\z\y\1\g\d\6\z\0\1\r\o\g\o\o\7\b\g\0\z\j\1\h\k\o\g\1\a\w\7\i\q\3\a\a\g\1\l\u\y\w\5\x\3\u\r\w\6\b\b\x\3\d\2 ]] 00:07:20.393 00:07:20.393 real 0m1.900s 00:07:20.393 user 0m1.126s 00:07:20.393 sys 0m0.435s 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:20.393 ************************************ 00:07:20.393 END TEST dd_flag_nofollow_forced_aio 
00:07:20.393 ************************************ 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:20.393 ************************************ 00:07:20.393 START TEST dd_flag_noatime_forced_aio 00:07:20.393 ************************************ 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721031612 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721031612 00:07:20.393 08:20:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:21.323 08:20:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.324 [2024-07-15 08:20:13.492170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:21.324 [2024-07-15 08:20:13.492292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63525 ] 00:07:21.580 [2024-07-15 08:20:13.636510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.837 [2024-07-15 08:20:13.751342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.837 [2024-07-15 08:20:13.803035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.096  Copying: 512/512 [B] (average 500 kBps) 00:07:22.096 00:07:22.096 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.096 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721031612 )) 00:07:22.096 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.096 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721031612 )) 00:07:22.096 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.096 [2024-07-15 08:20:14.155529] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:22.096 [2024-07-15 08:20:14.155622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63537 ] 00:07:22.352 [2024-07-15 08:20:14.288936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.352 [2024-07-15 08:20:14.405761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.352 [2024-07-15 08:20:14.457810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.609  Copying: 512/512 [B] (average 500 kBps) 00:07:22.609 00:07:22.609 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.609 ************************************ 00:07:22.609 END TEST dd_flag_noatime_forced_aio 00:07:22.609 ************************************ 00:07:22.609 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721031614 )) 00:07:22.609 00:07:22.609 real 0m2.342s 00:07:22.609 user 0m0.780s 00:07:22.609 sys 0m0.314s 00:07:22.609 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.609 08:20:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.868 08:20:14 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:22.868 ************************************ 00:07:22.868 START TEST dd_flags_misc_forced_aio 00:07:22.868 ************************************ 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.868 08:20:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:22.868 [2024-07-15 08:20:14.856593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:22.868 [2024-07-15 08:20:14.856703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63563 ] 00:07:22.868 [2024-07-15 08:20:14.995169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.126 [2024-07-15 08:20:15.121211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.126 [2024-07-15 08:20:15.176163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.383  Copying: 512/512 [B] (average 500 kBps) 00:07:23.383 00:07:23.383 08:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3lcwl8svsyqpggyd2lyr6yswi1p1t2jfseerzs0xxg81c1lotbc0hyb4bby9yvel1vnotnvl7bj74ni8l0gkcz4w7swilkoytck4jou2llld3dqotewvaz7rz91i2cbxvib04huty18xmm65zj6eik3ujetcszms9jpcq2diw7ysc0bjeu2dmszi05ip5xhzjh2b7rrt6nbw4nr5tmnt5eevtd0moiyle7v3m12e8vqa6eopvcqilhgt8t133wkkl2enrgxbhts26lmyrgjm2iia8t1r8l8hp5l5vvblp8fqvs23bc2radsredf5379ut7sbcw3r3vowkubq71slpic4577247mdmboix5vt49yp1y3rjhi1z12cdibu210sirjdf14loj3aywvukzwmnsmbg7yslyl7g5o1mnn82ty8hpdi41v146lqei1hlgj0wydjif9gl3o1ijp6yxn3yi4byh5y2iuap27qll35fqtvy1pmyre9uuyywymstds2 == 
\3\l\c\w\l\8\s\v\s\y\q\p\g\g\y\d\2\l\y\r\6\y\s\w\i\1\p\1\t\2\j\f\s\e\e\r\z\s\0\x\x\g\8\1\c\1\l\o\t\b\c\0\h\y\b\4\b\b\y\9\y\v\e\l\1\v\n\o\t\n\v\l\7\b\j\7\4\n\i\8\l\0\g\k\c\z\4\w\7\s\w\i\l\k\o\y\t\c\k\4\j\o\u\2\l\l\l\d\3\d\q\o\t\e\w\v\a\z\7\r\z\9\1\i\2\c\b\x\v\i\b\0\4\h\u\t\y\1\8\x\m\m\6\5\z\j\6\e\i\k\3\u\j\e\t\c\s\z\m\s\9\j\p\c\q\2\d\i\w\7\y\s\c\0\b\j\e\u\2\d\m\s\z\i\0\5\i\p\5\x\h\z\j\h\2\b\7\r\r\t\6\n\b\w\4\n\r\5\t\m\n\t\5\e\e\v\t\d\0\m\o\i\y\l\e\7\v\3\m\1\2\e\8\v\q\a\6\e\o\p\v\c\q\i\l\h\g\t\8\t\1\3\3\w\k\k\l\2\e\n\r\g\x\b\h\t\s\2\6\l\m\y\r\g\j\m\2\i\i\a\8\t\1\r\8\l\8\h\p\5\l\5\v\v\b\l\p\8\f\q\v\s\2\3\b\c\2\r\a\d\s\r\e\d\f\5\3\7\9\u\t\7\s\b\c\w\3\r\3\v\o\w\k\u\b\q\7\1\s\l\p\i\c\4\5\7\7\2\4\7\m\d\m\b\o\i\x\5\v\t\4\9\y\p\1\y\3\r\j\h\i\1\z\1\2\c\d\i\b\u\2\1\0\s\i\r\j\d\f\1\4\l\o\j\3\a\y\w\v\u\k\z\w\m\n\s\m\b\g\7\y\s\l\y\l\7\g\5\o\1\m\n\n\8\2\t\y\8\h\p\d\i\4\1\v\1\4\6\l\q\e\i\1\h\l\g\j\0\w\y\d\j\i\f\9\g\l\3\o\1\i\j\p\6\y\x\n\3\y\i\4\b\y\h\5\y\2\i\u\a\p\2\7\q\l\l\3\5\f\q\t\v\y\1\p\m\y\r\e\9\u\u\y\y\w\y\m\s\t\d\s\2 ]] 00:07:23.383 08:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.383 08:20:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:23.383 [2024-07-15 08:20:15.512597] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:23.383 [2024-07-15 08:20:15.512701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:07:23.641 [2024-07-15 08:20:15.650630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.641 [2024-07-15 08:20:15.766401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.898 [2024-07-15 08:20:15.818460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.157  Copying: 512/512 [B] (average 500 kBps) 00:07:24.157 00:07:24.157 08:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3lcwl8svsyqpggyd2lyr6yswi1p1t2jfseerzs0xxg81c1lotbc0hyb4bby9yvel1vnotnvl7bj74ni8l0gkcz4w7swilkoytck4jou2llld3dqotewvaz7rz91i2cbxvib04huty18xmm65zj6eik3ujetcszms9jpcq2diw7ysc0bjeu2dmszi05ip5xhzjh2b7rrt6nbw4nr5tmnt5eevtd0moiyle7v3m12e8vqa6eopvcqilhgt8t133wkkl2enrgxbhts26lmyrgjm2iia8t1r8l8hp5l5vvblp8fqvs23bc2radsredf5379ut7sbcw3r3vowkubq71slpic4577247mdmboix5vt49yp1y3rjhi1z12cdibu210sirjdf14loj3aywvukzwmnsmbg7yslyl7g5o1mnn82ty8hpdi41v146lqei1hlgj0wydjif9gl3o1ijp6yxn3yi4byh5y2iuap27qll35fqtvy1pmyre9uuyywymstds2 == 
\3\l\c\w\l\8\s\v\s\y\q\p\g\g\y\d\2\l\y\r\6\y\s\w\i\1\p\1\t\2\j\f\s\e\e\r\z\s\0\x\x\g\8\1\c\1\l\o\t\b\c\0\h\y\b\4\b\b\y\9\y\v\e\l\1\v\n\o\t\n\v\l\7\b\j\7\4\n\i\8\l\0\g\k\c\z\4\w\7\s\w\i\l\k\o\y\t\c\k\4\j\o\u\2\l\l\l\d\3\d\q\o\t\e\w\v\a\z\7\r\z\9\1\i\2\c\b\x\v\i\b\0\4\h\u\t\y\1\8\x\m\m\6\5\z\j\6\e\i\k\3\u\j\e\t\c\s\z\m\s\9\j\p\c\q\2\d\i\w\7\y\s\c\0\b\j\e\u\2\d\m\s\z\i\0\5\i\p\5\x\h\z\j\h\2\b\7\r\r\t\6\n\b\w\4\n\r\5\t\m\n\t\5\e\e\v\t\d\0\m\o\i\y\l\e\7\v\3\m\1\2\e\8\v\q\a\6\e\o\p\v\c\q\i\l\h\g\t\8\t\1\3\3\w\k\k\l\2\e\n\r\g\x\b\h\t\s\2\6\l\m\y\r\g\j\m\2\i\i\a\8\t\1\r\8\l\8\h\p\5\l\5\v\v\b\l\p\8\f\q\v\s\2\3\b\c\2\r\a\d\s\r\e\d\f\5\3\7\9\u\t\7\s\b\c\w\3\r\3\v\o\w\k\u\b\q\7\1\s\l\p\i\c\4\5\7\7\2\4\7\m\d\m\b\o\i\x\5\v\t\4\9\y\p\1\y\3\r\j\h\i\1\z\1\2\c\d\i\b\u\2\1\0\s\i\r\j\d\f\1\4\l\o\j\3\a\y\w\v\u\k\z\w\m\n\s\m\b\g\7\y\s\l\y\l\7\g\5\o\1\m\n\n\8\2\t\y\8\h\p\d\i\4\1\v\1\4\6\l\q\e\i\1\h\l\g\j\0\w\y\d\j\i\f\9\g\l\3\o\1\i\j\p\6\y\x\n\3\y\i\4\b\y\h\5\y\2\i\u\a\p\2\7\q\l\l\3\5\f\q\t\v\y\1\p\m\y\r\e\9\u\u\y\y\w\y\m\s\t\d\s\2 ]] 00:07:24.157 08:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.157 08:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:24.157 [2024-07-15 08:20:16.142793] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:24.157 [2024-07-15 08:20:16.142926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63584 ] 00:07:24.157 [2024-07-15 08:20:16.285741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.416 [2024-07-15 08:20:16.400205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.416 [2024-07-15 08:20:16.451693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.674  Copying: 512/512 [B] (average 166 kBps) 00:07:24.674 00:07:24.674 08:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3lcwl8svsyqpggyd2lyr6yswi1p1t2jfseerzs0xxg81c1lotbc0hyb4bby9yvel1vnotnvl7bj74ni8l0gkcz4w7swilkoytck4jou2llld3dqotewvaz7rz91i2cbxvib04huty18xmm65zj6eik3ujetcszms9jpcq2diw7ysc0bjeu2dmszi05ip5xhzjh2b7rrt6nbw4nr5tmnt5eevtd0moiyle7v3m12e8vqa6eopvcqilhgt8t133wkkl2enrgxbhts26lmyrgjm2iia8t1r8l8hp5l5vvblp8fqvs23bc2radsredf5379ut7sbcw3r3vowkubq71slpic4577247mdmboix5vt49yp1y3rjhi1z12cdibu210sirjdf14loj3aywvukzwmnsmbg7yslyl7g5o1mnn82ty8hpdi41v146lqei1hlgj0wydjif9gl3o1ijp6yxn3yi4byh5y2iuap27qll35fqtvy1pmyre9uuyywymstds2 == 
\3\l\c\w\l\8\s\v\s\y\q\p\g\g\y\d\2\l\y\r\6\y\s\w\i\1\p\1\t\2\j\f\s\e\e\r\z\s\0\x\x\g\8\1\c\1\l\o\t\b\c\0\h\y\b\4\b\b\y\9\y\v\e\l\1\v\n\o\t\n\v\l\7\b\j\7\4\n\i\8\l\0\g\k\c\z\4\w\7\s\w\i\l\k\o\y\t\c\k\4\j\o\u\2\l\l\l\d\3\d\q\o\t\e\w\v\a\z\7\r\z\9\1\i\2\c\b\x\v\i\b\0\4\h\u\t\y\1\8\x\m\m\6\5\z\j\6\e\i\k\3\u\j\e\t\c\s\z\m\s\9\j\p\c\q\2\d\i\w\7\y\s\c\0\b\j\e\u\2\d\m\s\z\i\0\5\i\p\5\x\h\z\j\h\2\b\7\r\r\t\6\n\b\w\4\n\r\5\t\m\n\t\5\e\e\v\t\d\0\m\o\i\y\l\e\7\v\3\m\1\2\e\8\v\q\a\6\e\o\p\v\c\q\i\l\h\g\t\8\t\1\3\3\w\k\k\l\2\e\n\r\g\x\b\h\t\s\2\6\l\m\y\r\g\j\m\2\i\i\a\8\t\1\r\8\l\8\h\p\5\l\5\v\v\b\l\p\8\f\q\v\s\2\3\b\c\2\r\a\d\s\r\e\d\f\5\3\7\9\u\t\7\s\b\c\w\3\r\3\v\o\w\k\u\b\q\7\1\s\l\p\i\c\4\5\7\7\2\4\7\m\d\m\b\o\i\x\5\v\t\4\9\y\p\1\y\3\r\j\h\i\1\z\1\2\c\d\i\b\u\2\1\0\s\i\r\j\d\f\1\4\l\o\j\3\a\y\w\v\u\k\z\w\m\n\s\m\b\g\7\y\s\l\y\l\7\g\5\o\1\m\n\n\8\2\t\y\8\h\p\d\i\4\1\v\1\4\6\l\q\e\i\1\h\l\g\j\0\w\y\d\j\i\f\9\g\l\3\o\1\i\j\p\6\y\x\n\3\y\i\4\b\y\h\5\y\2\i\u\a\p\2\7\q\l\l\3\5\f\q\t\v\y\1\p\m\y\r\e\9\u\u\y\y\w\y\m\s\t\d\s\2 ]] 00:07:24.674 08:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.674 08:20:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:24.674 [2024-07-15 08:20:16.793925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:24.674 [2024-07-15 08:20:16.794032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63591 ] 00:07:24.933 [2024-07-15 08:20:16.934808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.933 [2024-07-15 08:20:17.077223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.192 [2024-07-15 08:20:17.131177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.450  Copying: 512/512 [B] (average 500 kBps) 00:07:25.450 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 3lcwl8svsyqpggyd2lyr6yswi1p1t2jfseerzs0xxg81c1lotbc0hyb4bby9yvel1vnotnvl7bj74ni8l0gkcz4w7swilkoytck4jou2llld3dqotewvaz7rz91i2cbxvib04huty18xmm65zj6eik3ujetcszms9jpcq2diw7ysc0bjeu2dmszi05ip5xhzjh2b7rrt6nbw4nr5tmnt5eevtd0moiyle7v3m12e8vqa6eopvcqilhgt8t133wkkl2enrgxbhts26lmyrgjm2iia8t1r8l8hp5l5vvblp8fqvs23bc2radsredf5379ut7sbcw3r3vowkubq71slpic4577247mdmboix5vt49yp1y3rjhi1z12cdibu210sirjdf14loj3aywvukzwmnsmbg7yslyl7g5o1mnn82ty8hpdi41v146lqei1hlgj0wydjif9gl3o1ijp6yxn3yi4byh5y2iuap27qll35fqtvy1pmyre9uuyywymstds2 == 
\3\l\c\w\l\8\s\v\s\y\q\p\g\g\y\d\2\l\y\r\6\y\s\w\i\1\p\1\t\2\j\f\s\e\e\r\z\s\0\x\x\g\8\1\c\1\l\o\t\b\c\0\h\y\b\4\b\b\y\9\y\v\e\l\1\v\n\o\t\n\v\l\7\b\j\7\4\n\i\8\l\0\g\k\c\z\4\w\7\s\w\i\l\k\o\y\t\c\k\4\j\o\u\2\l\l\l\d\3\d\q\o\t\e\w\v\a\z\7\r\z\9\1\i\2\c\b\x\v\i\b\0\4\h\u\t\y\1\8\x\m\m\6\5\z\j\6\e\i\k\3\u\j\e\t\c\s\z\m\s\9\j\p\c\q\2\d\i\w\7\y\s\c\0\b\j\e\u\2\d\m\s\z\i\0\5\i\p\5\x\h\z\j\h\2\b\7\r\r\t\6\n\b\w\4\n\r\5\t\m\n\t\5\e\e\v\t\d\0\m\o\i\y\l\e\7\v\3\m\1\2\e\8\v\q\a\6\e\o\p\v\c\q\i\l\h\g\t\8\t\1\3\3\w\k\k\l\2\e\n\r\g\x\b\h\t\s\2\6\l\m\y\r\g\j\m\2\i\i\a\8\t\1\r\8\l\8\h\p\5\l\5\v\v\b\l\p\8\f\q\v\s\2\3\b\c\2\r\a\d\s\r\e\d\f\5\3\7\9\u\t\7\s\b\c\w\3\r\3\v\o\w\k\u\b\q\7\1\s\l\p\i\c\4\5\7\7\2\4\7\m\d\m\b\o\i\x\5\v\t\4\9\y\p\1\y\3\r\j\h\i\1\z\1\2\c\d\i\b\u\2\1\0\s\i\r\j\d\f\1\4\l\o\j\3\a\y\w\v\u\k\z\w\m\n\s\m\b\g\7\y\s\l\y\l\7\g\5\o\1\m\n\n\8\2\t\y\8\h\p\d\i\4\1\v\1\4\6\l\q\e\i\1\h\l\g\j\0\w\y\d\j\i\f\9\g\l\3\o\1\i\j\p\6\y\x\n\3\y\i\4\b\y\h\5\y\2\i\u\a\p\2\7\q\l\l\3\5\f\q\t\v\y\1\p\m\y\r\e\9\u\u\y\y\w\y\m\s\t\d\s\2 ]] 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.450 08:20:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:25.451 [2024-07-15 08:20:17.464474] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:25.451 [2024-07-15 08:20:17.464580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63599 ] 00:07:25.451 [2024-07-15 08:20:17.601992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.711 [2024-07-15 08:20:17.717347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.711 [2024-07-15 08:20:17.769358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.969  Copying: 512/512 [B] (average 500 kBps) 00:07:25.969 00:07:25.969 08:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jfz81g0auyvm6kwq06q8gbd6sz44ban2guvuutf7yled7tjfjftqvpy803vf8kpklz2b8wqaq9qo0dzh9gw9zrel8hd1i5ouhx9w5fnr4q2s6m252znkd0q2roxmygagvokswr4p5t1l8ktq9o174tq0zj77wqm4fiyk1c0z87ra9n1re42jyirii29geme45ku7i0l2m5pc9i8xknndk7r9eg7kjf9bwcvgjo8nghwivafh3dynitv2nug6z0dg7d09dkd3xel7qdruvpr6i9bvvud7ma89wxjvsnbkul35uytfqnia03vtib06utn8sh7ehhac3va9tja0gl30do1qta3yuz40engmz4zv7wiua4ve4cubkpnq3zmikyf61u8vsj6xgv2dtrwriho3y8w6jb6pkzzw6fwlzr2xknkkbv9q67vp7cymuzi5zi1grfy8i9umptvq9yvwx8ze3nlx4cuxscdy3qcf8uk76o2pza04etoze1vurscp95m0 == \j\f\z\8\1\g\0\a\u\y\v\m\6\k\w\q\0\6\q\8\g\b\d\6\s\z\4\4\b\a\n\2\g\u\v\u\u\t\f\7\y\l\e\d\7\t\j\f\j\f\t\q\v\p\y\8\0\3\v\f\8\k\p\k\l\z\2\b\8\w\q\a\q\9\q\o\0\d\z\h\9\g\w\9\z\r\e\l\8\h\d\1\i\5\o\u\h\x\9\w\5\f\n\r\4\q\2\s\6\m\2\5\2\z\n\k\d\0\q\2\r\o\x\m\y\g\a\g\v\o\k\s\w\r\4\p\5\t\1\l\8\k\t\q\9\o\1\7\4\t\q\0\z\j\7\7\w\q\m\4\f\i\y\k\1\c\0\z\8\7\r\a\9\n\1\r\e\4\2\j\y\i\r\i\i\2\9\g\e\m\e\4\5\k\u\7\i\0\l\2\m\5\p\c\9\i\8\x\k\n\n\d\k\7\r\9\e\g\7\k\j\f\9\b\w\c\v\g\j\o\8\n\g\h\w\i\v\a\f\h\3\d\y\n\i\t\v\2\n\u\g\6\z\0\d\g\7\d\0\9\d\k\d\3\x\e\l\7\q\d\r\u\v\p\r\6\i\9\b\v\v\u\d\7\m\a\8\9\w\x\j\v\s\n\b\k\u\l\3\5\u\y\t\f\q\n\i\a\0\3\v\t\i\b\0\6\u\t\n\8\s\h\7\e\h\h\a\c\3\v\a\9\t\j\a\0\g\l\3\0\d\o\1\q\t\a\3\y\u\z\4\0\e\n\g\m\z\4\z\v\7\w\i\u\a\4\v\e\4\c\u\b\k\p\n\q\3\z\m\i\k\y\f\6\1\u\8\v\s\j\6\x\g\v\2\d\t\r\w\r\i\h\o\3\y\8\w\6\j\b\6\p\k\z\z\w\6\f\w\l\z\r\2\x\k\n\k\k\b\v\9\q\6\7\v\p\7\c\y\m\u\z\i\5\z\i\1\g\r\f\y\8\i\9\u\m\p\t\v\q\9\y\v\w\x\8\z\e\3\n\l\x\4\c\u\x\s\c\d\y\3\q\c\f\8\u\k\7\6\o\2\p\z\a\0\4\e\t\o\z\e\1\v\u\r\s\c\p\9\5\m\0 ]] 00:07:25.969 08:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.969 08:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:25.969 [2024-07-15 08:20:18.092341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:25.969 [2024-07-15 08:20:18.092438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63611 ] 00:07:26.233 [2024-07-15 08:20:18.236464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.234 [2024-07-15 08:20:18.352841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.495 [2024-07-15 08:20:18.404659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.753  Copying: 512/512 [B] (average 500 kBps) 00:07:26.753 00:07:26.753 08:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jfz81g0auyvm6kwq06q8gbd6sz44ban2guvuutf7yled7tjfjftqvpy803vf8kpklz2b8wqaq9qo0dzh9gw9zrel8hd1i5ouhx9w5fnr4q2s6m252znkd0q2roxmygagvokswr4p5t1l8ktq9o174tq0zj77wqm4fiyk1c0z87ra9n1re42jyirii29geme45ku7i0l2m5pc9i8xknndk7r9eg7kjf9bwcvgjo8nghwivafh3dynitv2nug6z0dg7d09dkd3xel7qdruvpr6i9bvvud7ma89wxjvsnbkul35uytfqnia03vtib06utn8sh7ehhac3va9tja0gl30do1qta3yuz40engmz4zv7wiua4ve4cubkpnq3zmikyf61u8vsj6xgv2dtrwriho3y8w6jb6pkzzw6fwlzr2xknkkbv9q67vp7cymuzi5zi1grfy8i9umptvq9yvwx8ze3nlx4cuxscdy3qcf8uk76o2pza04etoze1vurscp95m0 == \j\f\z\8\1\g\0\a\u\y\v\m\6\k\w\q\0\6\q\8\g\b\d\6\s\z\4\4\b\a\n\2\g\u\v\u\u\t\f\7\y\l\e\d\7\t\j\f\j\f\t\q\v\p\y\8\0\3\v\f\8\k\p\k\l\z\2\b\8\w\q\a\q\9\q\o\0\d\z\h\9\g\w\9\z\r\e\l\8\h\d\1\i\5\o\u\h\x\9\w\5\f\n\r\4\q\2\s\6\m\2\5\2\z\n\k\d\0\q\2\r\o\x\m\y\g\a\g\v\o\k\s\w\r\4\p\5\t\1\l\8\k\t\q\9\o\1\7\4\t\q\0\z\j\7\7\w\q\m\4\f\i\y\k\1\c\0\z\8\7\r\a\9\n\1\r\e\4\2\j\y\i\r\i\i\2\9\g\e\m\e\4\5\k\u\7\i\0\l\2\m\5\p\c\9\i\8\x\k\n\n\d\k\7\r\9\e\g\7\k\j\f\9\b\w\c\v\g\j\o\8\n\g\h\w\i\v\a\f\h\3\d\y\n\i\t\v\2\n\u\g\6\z\0\d\g\7\d\0\9\d\k\d\3\x\e\l\7\q\d\r\u\v\p\r\6\i\9\b\v\v\u\d\7\m\a\8\9\w\x\j\v\s\n\b\k\u\l\3\5\u\y\t\f\q\n\i\a\0\3\v\t\i\b\0\6\u\t\n\8\s\h\7\e\h\h\a\c\3\v\a\9\t\j\a\0\g\l\3\0\d\o\1\q\t\a\3\y\u\z\4\0\e\n\g\m\z\4\z\v\7\w\i\u\a\4\v\e\4\c\u\b\k\p\n\q\3\z\m\i\k\y\f\6\1\u\8\v\s\j\6\x\g\v\2\d\t\r\w\r\i\h\o\3\y\8\w\6\j\b\6\p\k\z\z\w\6\f\w\l\z\r\2\x\k\n\k\k\b\v\9\q\6\7\v\p\7\c\y\m\u\z\i\5\z\i\1\g\r\f\y\8\i\9\u\m\p\t\v\q\9\y\v\w\x\8\z\e\3\n\l\x\4\c\u\x\s\c\d\y\3\q\c\f\8\u\k\7\6\o\2\p\z\a\0\4\e\t\o\z\e\1\v\u\r\s\c\p\9\5\m\0 ]] 00:07:26.753 08:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.753 08:20:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:26.753 [2024-07-15 08:20:18.729633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:26.753 [2024-07-15 08:20:18.729749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63614 ] 00:07:26.753 [2024-07-15 08:20:18.869139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.012 [2024-07-15 08:20:18.984315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.012 [2024-07-15 08:20:19.036287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.271  Copying: 512/512 [B] (average 250 kBps) 00:07:27.271 00:07:27.271 08:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jfz81g0auyvm6kwq06q8gbd6sz44ban2guvuutf7yled7tjfjftqvpy803vf8kpklz2b8wqaq9qo0dzh9gw9zrel8hd1i5ouhx9w5fnr4q2s6m252znkd0q2roxmygagvokswr4p5t1l8ktq9o174tq0zj77wqm4fiyk1c0z87ra9n1re42jyirii29geme45ku7i0l2m5pc9i8xknndk7r9eg7kjf9bwcvgjo8nghwivafh3dynitv2nug6z0dg7d09dkd3xel7qdruvpr6i9bvvud7ma89wxjvsnbkul35uytfqnia03vtib06utn8sh7ehhac3va9tja0gl30do1qta3yuz40engmz4zv7wiua4ve4cubkpnq3zmikyf61u8vsj6xgv2dtrwriho3y8w6jb6pkzzw6fwlzr2xknkkbv9q67vp7cymuzi5zi1grfy8i9umptvq9yvwx8ze3nlx4cuxscdy3qcf8uk76o2pza04etoze1vurscp95m0 == \j\f\z\8\1\g\0\a\u\y\v\m\6\k\w\q\0\6\q\8\g\b\d\6\s\z\4\4\b\a\n\2\g\u\v\u\u\t\f\7\y\l\e\d\7\t\j\f\j\f\t\q\v\p\y\8\0\3\v\f\8\k\p\k\l\z\2\b\8\w\q\a\q\9\q\o\0\d\z\h\9\g\w\9\z\r\e\l\8\h\d\1\i\5\o\u\h\x\9\w\5\f\n\r\4\q\2\s\6\m\2\5\2\z\n\k\d\0\q\2\r\o\x\m\y\g\a\g\v\o\k\s\w\r\4\p\5\t\1\l\8\k\t\q\9\o\1\7\4\t\q\0\z\j\7\7\w\q\m\4\f\i\y\k\1\c\0\z\8\7\r\a\9\n\1\r\e\4\2\j\y\i\r\i\i\2\9\g\e\m\e\4\5\k\u\7\i\0\l\2\m\5\p\c\9\i\8\x\k\n\n\d\k\7\r\9\e\g\7\k\j\f\9\b\w\c\v\g\j\o\8\n\g\h\w\i\v\a\f\h\3\d\y\n\i\t\v\2\n\u\g\6\z\0\d\g\7\d\0\9\d\k\d\3\x\e\l\7\q\d\r\u\v\p\r\6\i\9\b\v\v\u\d\7\m\a\8\9\w\x\j\v\s\n\b\k\u\l\3\5\u\y\t\f\q\n\i\a\0\3\v\t\i\b\0\6\u\t\n\8\s\h\7\e\h\h\a\c\3\v\a\9\t\j\a\0\g\l\3\0\d\o\1\q\t\a\3\y\u\z\4\0\e\n\g\m\z\4\z\v\7\w\i\u\a\4\v\e\4\c\u\b\k\p\n\q\3\z\m\i\k\y\f\6\1\u\8\v\s\j\6\x\g\v\2\d\t\r\w\r\i\h\o\3\y\8\w\6\j\b\6\p\k\z\z\w\6\f\w\l\z\r\2\x\k\n\k\k\b\v\9\q\6\7\v\p\7\c\y\m\u\z\i\5\z\i\1\g\r\f\y\8\i\9\u\m\p\t\v\q\9\y\v\w\x\8\z\e\3\n\l\x\4\c\u\x\s\c\d\y\3\q\c\f\8\u\k\7\6\o\2\p\z\a\0\4\e\t\o\z\e\1\v\u\r\s\c\p\9\5\m\0 ]] 00:07:27.271 08:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.271 08:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:27.271 [2024-07-15 08:20:19.374023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:27.271 [2024-07-15 08:20:19.374123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63627 ] 00:07:27.530 [2024-07-15 08:20:19.511035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.530 [2024-07-15 08:20:19.627438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.530 [2024-07-15 08:20:19.679378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.857  Copying: 512/512 [B] (average 250 kBps) 00:07:27.857 00:07:27.857 ************************************ 00:07:27.857 END TEST dd_flags_misc_forced_aio 00:07:27.857 ************************************ 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jfz81g0auyvm6kwq06q8gbd6sz44ban2guvuutf7yled7tjfjftqvpy803vf8kpklz2b8wqaq9qo0dzh9gw9zrel8hd1i5ouhx9w5fnr4q2s6m252znkd0q2roxmygagvokswr4p5t1l8ktq9o174tq0zj77wqm4fiyk1c0z87ra9n1re42jyirii29geme45ku7i0l2m5pc9i8xknndk7r9eg7kjf9bwcvgjo8nghwivafh3dynitv2nug6z0dg7d09dkd3xel7qdruvpr6i9bvvud7ma89wxjvsnbkul35uytfqnia03vtib06utn8sh7ehhac3va9tja0gl30do1qta3yuz40engmz4zv7wiua4ve4cubkpnq3zmikyf61u8vsj6xgv2dtrwriho3y8w6jb6pkzzw6fwlzr2xknkkbv9q67vp7cymuzi5zi1grfy8i9umptvq9yvwx8ze3nlx4cuxscdy3qcf8uk76o2pza04etoze1vurscp95m0 == \j\f\z\8\1\g\0\a\u\y\v\m\6\k\w\q\0\6\q\8\g\b\d\6\s\z\4\4\b\a\n\2\g\u\v\u\u\t\f\7\y\l\e\d\7\t\j\f\j\f\t\q\v\p\y\8\0\3\v\f\8\k\p\k\l\z\2\b\8\w\q\a\q\9\q\o\0\d\z\h\9\g\w\9\z\r\e\l\8\h\d\1\i\5\o\u\h\x\9\w\5\f\n\r\4\q\2\s\6\m\2\5\2\z\n\k\d\0\q\2\r\o\x\m\y\g\a\g\v\o\k\s\w\r\4\p\5\t\1\l\8\k\t\q\9\o\1\7\4\t\q\0\z\j\7\7\w\q\m\4\f\i\y\k\1\c\0\z\8\7\r\a\9\n\1\r\e\4\2\j\y\i\r\i\i\2\9\g\e\m\e\4\5\k\u\7\i\0\l\2\m\5\p\c\9\i\8\x\k\n\n\d\k\7\r\9\e\g\7\k\j\f\9\b\w\c\v\g\j\o\8\n\g\h\w\i\v\a\f\h\3\d\y\n\i\t\v\2\n\u\g\6\z\0\d\g\7\d\0\9\d\k\d\3\x\e\l\7\q\d\r\u\v\p\r\6\i\9\b\v\v\u\d\7\m\a\8\9\w\x\j\v\s\n\b\k\u\l\3\5\u\y\t\f\q\n\i\a\0\3\v\t\i\b\0\6\u\t\n\8\s\h\7\e\h\h\a\c\3\v\a\9\t\j\a\0\g\l\3\0\d\o\1\q\t\a\3\y\u\z\4\0\e\n\g\m\z\4\z\v\7\w\i\u\a\4\v\e\4\c\u\b\k\p\n\q\3\z\m\i\k\y\f\6\1\u\8\v\s\j\6\x\g\v\2\d\t\r\w\r\i\h\o\3\y\8\w\6\j\b\6\p\k\z\z\w\6\f\w\l\z\r\2\x\k\n\k\k\b\v\9\q\6\7\v\p\7\c\y\m\u\z\i\5\z\i\1\g\r\f\y\8\i\9\u\m\p\t\v\q\9\y\v\w\x\8\z\e\3\n\l\x\4\c\u\x\s\c\d\y\3\q\c\f\8\u\k\7\6\o\2\p\z\a\0\4\e\t\o\z\e\1\v\u\r\s\c\p\9\5\m\0 ]] 00:07:27.858 00:07:27.858 real 0m5.150s 00:07:27.858 user 0m3.019s 00:07:27.858 sys 0m1.146s 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:27.858 00:07:27.858 real 0m22.806s 00:07:27.858 user 0m12.051s 00:07:27.858 sys 0m6.549s 00:07:27.858 08:20:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.858 08:20:19 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:27.858 ************************************ 00:07:27.858 END TEST spdk_dd_posix 00:07:27.858 ************************************ 00:07:28.117 08:20:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:28.117 08:20:20 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:28.117 08:20:20 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.117 08:20:20 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.117 08:20:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:28.117 ************************************ 00:07:28.117 START TEST spdk_dd_malloc 00:07:28.117 ************************************ 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:28.117 * Looking for test storage... 00:07:28.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.117 08:20:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:28.118 ************************************ 00:07:28.118 START TEST dd_malloc_copy 00:07:28.118 ************************************ 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:28.118 08:20:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.118 [2024-07-15 08:20:20.193841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:28.118 [2024-07-15 08:20:20.194187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63701 ] 00:07:28.118 { 00:07:28.118 "subsystems": [ 00:07:28.118 { 00:07:28.118 "subsystem": "bdev", 00:07:28.118 "config": [ 00:07:28.118 { 00:07:28.118 "params": { 00:07:28.118 "block_size": 512, 00:07:28.118 "num_blocks": 1048576, 00:07:28.118 "name": "malloc0" 00:07:28.118 }, 00:07:28.118 "method": "bdev_malloc_create" 00:07:28.118 }, 00:07:28.118 { 00:07:28.118 "params": { 00:07:28.118 "block_size": 512, 00:07:28.118 "num_blocks": 1048576, 00:07:28.118 "name": "malloc1" 00:07:28.118 }, 00:07:28.118 "method": "bdev_malloc_create" 00:07:28.118 }, 00:07:28.118 { 00:07:28.118 "method": "bdev_wait_for_examine" 00:07:28.118 } 00:07:28.118 ] 00:07:28.118 } 00:07:28.118 ] 00:07:28.118 } 00:07:28.377 [2024-07-15 08:20:20.332234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.377 [2024-07-15 08:20:20.448171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.377 [2024-07-15 08:20:20.500761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.816  Copying: 199/512 [MB] (199 MBps) Copying: 400/512 [MB] (200 MBps) Copying: 512/512 [MB] (average 200 MBps) 00:07:31.817 00:07:31.817 08:20:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:31.817 08:20:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:31.817 08:20:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:31.817 08:20:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.075 [2024-07-15 08:20:24.026618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:32.075 [2024-07-15 08:20:24.027244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63743 ] 00:07:32.075 { 00:07:32.075 "subsystems": [ 00:07:32.075 { 00:07:32.075 "subsystem": "bdev", 00:07:32.075 "config": [ 00:07:32.075 { 00:07:32.075 "params": { 00:07:32.075 "block_size": 512, 00:07:32.075 "num_blocks": 1048576, 00:07:32.075 "name": "malloc0" 00:07:32.075 }, 00:07:32.075 "method": "bdev_malloc_create" 00:07:32.075 }, 00:07:32.075 { 00:07:32.075 "params": { 00:07:32.075 "block_size": 512, 00:07:32.075 "num_blocks": 1048576, 00:07:32.075 "name": "malloc1" 00:07:32.075 }, 00:07:32.075 "method": "bdev_malloc_create" 00:07:32.075 }, 00:07:32.075 { 00:07:32.075 "method": "bdev_wait_for_examine" 00:07:32.075 } 00:07:32.075 ] 00:07:32.075 } 00:07:32.075 ] 00:07:32.075 } 00:07:32.075 [2024-07-15 08:20:24.163574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.333 [2024-07-15 08:20:24.291583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.333 [2024-07-15 08:20:24.344401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.774  Copying: 200/512 [MB] (200 MBps) Copying: 402/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 201 MBps) 00:07:35.774 00:07:35.774 00:07:35.774 real 0m7.654s 00:07:35.774 user 0m6.672s 00:07:35.774 sys 0m0.823s 00:07:35.774 ************************************ 00:07:35.774 END TEST dd_malloc_copy 00:07:35.774 ************************************ 00:07:35.774 08:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.774 08:20:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 08:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:35.774 ************************************ 00:07:35.774 END TEST spdk_dd_malloc 00:07:35.774 ************************************ 00:07:35.774 00:07:35.774 real 0m7.790s 00:07:35.774 user 0m6.722s 00:07:35.774 sys 0m0.911s 00:07:35.774 08:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.774 08:20:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 08:20:27 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:35.774 08:20:27 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:35.774 08:20:27 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.774 08:20:27 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.774 08:20:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 ************************************ 00:07:35.774 START TEST spdk_dd_bdev_to_bdev 00:07:35.774 ************************************ 00:07:35.774 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:36.033 * Looking for test storage... 
00:07:36.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:36.033 
08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.033 ************************************ 00:07:36.033 START TEST dd_inflate_file 00:07:36.033 ************************************ 00:07:36.033 08:20:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:36.033 [2024-07-15 08:20:28.027971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:36.033 [2024-07-15 08:20:28.028078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63853 ] 00:07:36.033 [2024-07-15 08:20:28.165635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.292 [2024-07-15 08:20:28.282137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.292 [2024-07-15 08:20:28.334214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.550  Copying: 64/64 [MB] (average 1560 MBps) 00:07:36.550 00:07:36.550 00:07:36.550 real 0m0.646s 00:07:36.550 user 0m0.393s 00:07:36.550 sys 0m0.308s 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:36.550 ************************************ 00:07:36.550 END TEST dd_inflate_file 00:07:36.550 ************************************ 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:36.550 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:36.551 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:36.551 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.551 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.551 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.551 ************************************ 00:07:36.551 START TEST dd_copy_to_out_bdev 00:07:36.551 ************************************ 00:07:36.551 08:20:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:36.551 [2024-07-15 08:20:28.720151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:36.551 [2024-07-15 08:20:28.720822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ] 00:07:36.809 { 00:07:36.809 "subsystems": [ 00:07:36.809 { 00:07:36.809 "subsystem": "bdev", 00:07:36.809 "config": [ 00:07:36.809 { 00:07:36.809 "params": { 00:07:36.809 "trtype": "pcie", 00:07:36.809 "traddr": "0000:00:10.0", 00:07:36.809 "name": "Nvme0" 00:07:36.809 }, 00:07:36.809 "method": "bdev_nvme_attach_controller" 00:07:36.809 }, 00:07:36.809 { 00:07:36.809 "params": { 00:07:36.809 "trtype": "pcie", 00:07:36.809 "traddr": "0000:00:11.0", 00:07:36.809 "name": "Nvme1" 00:07:36.809 }, 00:07:36.809 "method": "bdev_nvme_attach_controller" 00:07:36.809 }, 00:07:36.809 { 00:07:36.809 "method": "bdev_wait_for_examine" 00:07:36.809 } 00:07:36.809 ] 00:07:36.809 } 00:07:36.809 ] 00:07:36.809 } 00:07:36.809 [2024-07-15 08:20:28.858856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.067 [2024-07-15 08:20:28.988922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.068 [2024-07-15 08:20:29.045170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.703  Copying: 55/64 [MB] (55 MBps) Copying: 64/64 [MB] (average 55 MBps) 00:07:38.703 00:07:38.703 00:07:38.703 real 0m1.980s 00:07:38.703 user 0m1.730s 00:07:38.703 sys 0m1.507s 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.703 ************************************ 00:07:38.703 END TEST dd_copy_to_out_bdev 00:07:38.703 ************************************ 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.703 ************************************ 00:07:38.703 START TEST dd_offset_magic 00:07:38.703 ************************************ 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:38.703 08:20:30 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:38.703 08:20:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:38.703 [2024-07-15 08:20:30.761523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:38.703 [2024-07-15 08:20:30.761629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63937 ] 00:07:38.703 { 00:07:38.703 "subsystems": [ 00:07:38.703 { 00:07:38.703 "subsystem": "bdev", 00:07:38.703 "config": [ 00:07:38.703 { 00:07:38.703 "params": { 00:07:38.703 "trtype": "pcie", 00:07:38.703 "traddr": "0000:00:10.0", 00:07:38.703 "name": "Nvme0" 00:07:38.703 }, 00:07:38.703 "method": "bdev_nvme_attach_controller" 00:07:38.703 }, 00:07:38.703 { 00:07:38.703 "params": { 00:07:38.703 "trtype": "pcie", 00:07:38.703 "traddr": "0000:00:11.0", 00:07:38.703 "name": "Nvme1" 00:07:38.703 }, 00:07:38.703 "method": "bdev_nvme_attach_controller" 00:07:38.703 }, 00:07:38.703 { 00:07:38.703 "method": "bdev_wait_for_examine" 00:07:38.703 } 00:07:38.703 ] 00:07:38.703 } 00:07:38.703 ] 00:07:38.703 } 00:07:38.961 [2024-07-15 08:20:30.901613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.962 [2024-07-15 08:20:31.032278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.962 [2024-07-15 08:20:31.090210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.540  Copying: 65/65 [MB] (average 984 MBps) 00:07:39.540 00:07:39.540 08:20:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:39.540 08:20:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:39.540 08:20:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:39.540 08:20:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:39.540 [2024-07-15 08:20:31.645632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:39.540 [2024-07-15 08:20:31.645748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63952 ] 00:07:39.540 { 00:07:39.540 "subsystems": [ 00:07:39.540 { 00:07:39.540 "subsystem": "bdev", 00:07:39.540 "config": [ 00:07:39.540 { 00:07:39.540 "params": { 00:07:39.540 "trtype": "pcie", 00:07:39.540 "traddr": "0000:00:10.0", 00:07:39.540 "name": "Nvme0" 00:07:39.540 }, 00:07:39.540 "method": "bdev_nvme_attach_controller" 00:07:39.540 }, 00:07:39.540 { 00:07:39.540 "params": { 00:07:39.540 "trtype": "pcie", 00:07:39.540 "traddr": "0000:00:11.0", 00:07:39.540 "name": "Nvme1" 00:07:39.540 }, 00:07:39.540 "method": "bdev_nvme_attach_controller" 00:07:39.540 }, 00:07:39.540 { 00:07:39.540 "method": "bdev_wait_for_examine" 00:07:39.540 } 00:07:39.540 ] 00:07:39.540 } 00:07:39.540 ] 00:07:39.540 } 00:07:39.799 [2024-07-15 08:20:31.786807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.799 [2024-07-15 08:20:31.914858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.058 [2024-07-15 08:20:31.971967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.317  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:40.317 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:40.317 08:20:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:40.317 [2024-07-15 08:20:32.429870] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:40.317 { 00:07:40.317 "subsystems": [ 00:07:40.317 { 00:07:40.318 "subsystem": "bdev", 00:07:40.318 "config": [ 00:07:40.318 { 00:07:40.318 "params": { 00:07:40.318 "trtype": "pcie", 00:07:40.318 "traddr": "0000:00:10.0", 00:07:40.318 "name": "Nvme0" 00:07:40.318 }, 00:07:40.318 "method": "bdev_nvme_attach_controller" 00:07:40.318 }, 00:07:40.318 { 00:07:40.318 "params": { 00:07:40.318 "trtype": "pcie", 00:07:40.318 "traddr": "0000:00:11.0", 00:07:40.318 "name": "Nvme1" 00:07:40.318 }, 00:07:40.318 "method": "bdev_nvme_attach_controller" 00:07:40.318 }, 00:07:40.318 { 00:07:40.318 "method": "bdev_wait_for_examine" 00:07:40.318 } 00:07:40.318 ] 00:07:40.318 } 00:07:40.318 ] 00:07:40.318 } 00:07:40.318 [2024-07-15 08:20:32.430702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63974 ] 00:07:40.576 [2024-07-15 08:20:32.563479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.576 [2024-07-15 08:20:32.680684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.576 [2024-07-15 08:20:32.735136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.093  Copying: 65/65 [MB] (average 1140 MBps) 00:07:41.093 00:07:41.093 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:41.093 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:41.093 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.093 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.352 [2024-07-15 08:20:33.278107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:41.352 [2024-07-15 08:20:33.278206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63988 ] 00:07:41.352 { 00:07:41.352 "subsystems": [ 00:07:41.352 { 00:07:41.352 "subsystem": "bdev", 00:07:41.352 "config": [ 00:07:41.352 { 00:07:41.352 "params": { 00:07:41.352 "trtype": "pcie", 00:07:41.352 "traddr": "0000:00:10.0", 00:07:41.352 "name": "Nvme0" 00:07:41.352 }, 00:07:41.352 "method": "bdev_nvme_attach_controller" 00:07:41.352 }, 00:07:41.352 { 00:07:41.352 "params": { 00:07:41.352 "trtype": "pcie", 00:07:41.352 "traddr": "0000:00:11.0", 00:07:41.352 "name": "Nvme1" 00:07:41.352 }, 00:07:41.352 "method": "bdev_nvme_attach_controller" 00:07:41.352 }, 00:07:41.352 { 00:07:41.352 "method": "bdev_wait_for_examine" 00:07:41.352 } 00:07:41.352 ] 00:07:41.352 } 00:07:41.352 ] 00:07:41.352 } 00:07:41.352 [2024-07-15 08:20:33.414809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.611 [2024-07-15 08:20:33.529297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.611 [2024-07-15 08:20:33.582114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.870  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:41.870 00:07:41.870 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:41.870 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:41.870 00:07:41.870 real 0m3.266s 00:07:41.870 user 0m2.379s 00:07:41.870 sys 0m0.944s 00:07:41.870 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.870 ************************************ 00:07:41.870 END TEST dd_offset_magic 00:07:41.870 ************************************ 00:07:41.870 08:20:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:41.870 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 [2024-07-15 08:20:34.063382] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:42.129 [2024-07-15 08:20:34.063478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64025 ] 00:07:42.129 { 00:07:42.129 "subsystems": [ 00:07:42.129 { 00:07:42.129 "subsystem": "bdev", 00:07:42.129 "config": [ 00:07:42.129 { 00:07:42.129 "params": { 00:07:42.129 "trtype": "pcie", 00:07:42.129 "traddr": "0000:00:10.0", 00:07:42.129 "name": "Nvme0" 00:07:42.129 }, 00:07:42.129 "method": "bdev_nvme_attach_controller" 00:07:42.129 }, 00:07:42.129 { 00:07:42.129 "params": { 00:07:42.129 "trtype": "pcie", 00:07:42.129 "traddr": "0000:00:11.0", 00:07:42.129 "name": "Nvme1" 00:07:42.129 }, 00:07:42.129 "method": "bdev_nvme_attach_controller" 00:07:42.129 }, 00:07:42.129 { 00:07:42.129 "method": "bdev_wait_for_examine" 00:07:42.129 } 00:07:42.129 ] 00:07:42.129 } 00:07:42.129 ] 00:07:42.129 } 00:07:42.129 [2024-07-15 08:20:34.201817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.386 [2024-07-15 08:20:34.317424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.386 [2024-07-15 08:20:34.370327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.644  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:42.644 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:42.644 08:20:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.902 [2024-07-15 08:20:34.818049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:42.902 [2024-07-15 08:20:34.818151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64041 ] 00:07:42.902 { 00:07:42.902 "subsystems": [ 00:07:42.902 { 00:07:42.902 "subsystem": "bdev", 00:07:42.902 "config": [ 00:07:42.902 { 00:07:42.902 "params": { 00:07:42.902 "trtype": "pcie", 00:07:42.902 "traddr": "0000:00:10.0", 00:07:42.902 "name": "Nvme0" 00:07:42.902 }, 00:07:42.902 "method": "bdev_nvme_attach_controller" 00:07:42.902 }, 00:07:42.902 { 00:07:42.902 "params": { 00:07:42.902 "trtype": "pcie", 00:07:42.902 "traddr": "0000:00:11.0", 00:07:42.902 "name": "Nvme1" 00:07:42.902 }, 00:07:42.902 "method": "bdev_nvme_attach_controller" 00:07:42.902 }, 00:07:42.902 { 00:07:42.902 "method": "bdev_wait_for_examine" 00:07:42.902 } 00:07:42.902 ] 00:07:42.902 } 00:07:42.902 ] 00:07:42.902 } 00:07:42.902 [2024-07-15 08:20:34.957975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.160 [2024-07-15 08:20:35.074789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.160 [2024-07-15 08:20:35.127970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.417  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:43.417 00:07:43.418 08:20:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:43.418 ************************************ 00:07:43.418 END TEST spdk_dd_bdev_to_bdev 00:07:43.418 ************************************ 00:07:43.418 00:07:43.418 real 0m7.672s 00:07:43.418 user 0m5.691s 00:07:43.418 sys 0m3.442s 00:07:43.418 08:20:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.418 08:20:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.675 08:20:35 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:43.675 08:20:35 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:43.675 08:20:35 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:43.675 08:20:35 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.675 08:20:35 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.675 08:20:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:43.675 ************************************ 00:07:43.675 START TEST spdk_dd_uring 00:07:43.675 ************************************ 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:43.675 * Looking for test storage... 
00:07:43.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:43.675 ************************************ 00:07:43.675 START TEST dd_uring_copy 00:07:43.675 ************************************ 00:07:43.675 
08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:43.675 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=jtim7ut5xajt4nqgf1zeymnt8qc2i4pfmnh0zza7nncmg8wlmypu1s8ihcezdbz7yo3utdyt0ut1e874ncqb8n738axgp7zonihedno4hmi793ugsn6yjwki7fo5m90auzmkgyh8vj4n3snnl5n19eyc1b1kc72osb6376lv0kmm7uktbn6wynl1g8jiq470apg4pb09ag0tv26jrbs02jjy5mbdptnd2cgsabsjxqdrwtbzzgilt92y65172qf599asno7e7zn9ry3cackoyol7k1a42qoq35c6oc4hcgu5vsbplblzrn7inib2to761rt8p4ggb918937nb7ynybfxbk0i964xkv9b0mwggkuiitng500sr41wle6boucy3byeor491slushqade1lrxcx6zfqym6pm403et3swho8xtigshpdnd5uums8i2g5ldrasih56vdmdk0sdweavqk3bssqexd5p62nipbjbwmjqxvsta2952la36p67nxnodv2fa9zjnckck9a13zx2zdhdou5yzw3xoga0gkbrkeaue51s7bop26n621r8dru74kadctnhi9tuznib08xljuto9c56v5l26rpskg1ddv7njalkldipso4ozapxfpsztf24367gsxj363ukl6t79scfo45zr8zj4hgl2lpwy8irma4rjo76wsgngrq7je2v7tu0tx915hr0thxadu9ubs7lopy0d9ug1eyc5if7pbo8f57fetb2smk9kyzrl84xbs9xi68541t2yck83s4bnk991twr6o80xqnwwu61js4eexw8e0dyqgqda2cpkrvw59ij3ms7heapouixzkajz1vmxfah059p39iswkqbzf9gt1cjcx9n8c80afarjkym6fte307t7kdg8h8ufb5pv9tnmg7x4kkmcztcf4rnr8nmmqo67thnya2f5cw27k8ql1ggkwxzif2n85153jsgegzq7zdmxknefg6hu5sa4s8srkghehd4lj3xitits01 00:07:43.676 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo jtim7ut5xajt4nqgf1zeymnt8qc2i4pfmnh0zza7nncmg8wlmypu1s8ihcezdbz7yo3utdyt0ut1e874ncqb8n738axgp7zonihedno4hmi793ugsn6yjwki7fo5m90auzmkgyh8vj4n3snnl5n19eyc1b1kc72osb6376lv0kmm7uktbn6wynl1g8jiq470apg4pb09ag0tv26jrbs02jjy5mbdptnd2cgsabsjxqdrwtbzzgilt92y65172qf599asno7e7zn9ry3cackoyol7k1a42qoq35c6oc4hcgu5vsbplblzrn7inib2to761rt8p4ggb918937nb7ynybfxbk0i964xkv9b0mwggkuiitng500sr41wle6boucy3byeor491slushqade1lrxcx6zfqym6pm403et3swho8xtigshpdnd5uums8i2g5ldrasih56vdmdk0sdweavqk3bssqexd5p62nipbjbwmjqxvsta2952la36p67nxnodv2fa9zjnckck9a13zx2zdhdou5yzw3xoga0gkbrkeaue51s7bop26n621r8dru74kadctnhi9tuznib08xljuto9c56v5l26rpskg1ddv7njalkldipso4ozapxfpsztf24367gsxj363ukl6t79scfo45zr8zj4hgl2lpwy8irma4rjo76wsgngrq7je2v7tu0tx915hr0thxadu9ubs7lopy0d9ug1eyc5if7pbo8f57fetb2smk9kyzrl84xbs9xi68541t2yck83s4bnk991twr6o80xqnwwu61js4eexw8e0dyqgqda2cpkrvw59ij3ms7heapouixzkajz1vmxfah059p39iswkqbzf9gt1cjcx9n8c80afarjkym6fte307t7kdg8h8ufb5pv9tnmg7x4kkmcztcf4rnr8nmmqo67thnya2f5cw27k8ql1ggkwxzif2n85153jsgegzq7zdmxknefg6hu5sa4s8srkghehd4lj3xitits01 00:07:43.676 08:20:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:43.676 [2024-07-15 08:20:35.781315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:43.676 [2024-07-15 08:20:35.781420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64111 ] 00:07:43.933 [2024-07-15 08:20:35.921430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.933 [2024-07-15 08:20:36.042002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.933 [2024-07-15 08:20:36.095115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.143  Copying: 511/511 [MB] (average 1094 MBps) 00:07:45.143 00:07:45.143 08:20:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:45.143 08:20:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:45.143 08:20:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:45.143 08:20:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.144 { 00:07:45.144 "subsystems": [ 00:07:45.144 { 00:07:45.144 "subsystem": "bdev", 00:07:45.144 "config": [ 00:07:45.144 { 00:07:45.144 "params": { 00:07:45.144 "block_size": 512, 00:07:45.144 "num_blocks": 1048576, 00:07:45.144 "name": "malloc0" 00:07:45.144 }, 00:07:45.144 "method": "bdev_malloc_create" 00:07:45.144 }, 00:07:45.144 { 00:07:45.144 "params": { 00:07:45.144 "filename": "/dev/zram1", 00:07:45.144 "name": "uring0" 00:07:45.144 }, 00:07:45.144 "method": "bdev_uring_create" 00:07:45.144 }, 00:07:45.144 { 00:07:45.144 "method": "bdev_wait_for_examine" 00:07:45.144 } 00:07:45.144 ] 00:07:45.144 } 00:07:45.144 ] 00:07:45.144 } 00:07:45.144 [2024-07-15 08:20:37.244378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:45.144 [2024-07-15 08:20:37.244729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64132 ] 00:07:45.402 [2024-07-15 08:20:37.384414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.402 [2024-07-15 08:20:37.506673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.402 [2024-07-15 08:20:37.561844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.550  Copying: 209/512 [MB] (209 MBps) Copying: 422/512 [MB] (212 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:07:48.550 00:07:48.550 08:20:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:48.550 08:20:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:48.550 08:20:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:48.550 08:20:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.550 [2024-07-15 08:20:40.637421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:48.550 [2024-07-15 08:20:40.637518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64183 ] 00:07:48.550 { 00:07:48.550 "subsystems": [ 00:07:48.550 { 00:07:48.550 "subsystem": "bdev", 00:07:48.550 "config": [ 00:07:48.550 { 00:07:48.550 "params": { 00:07:48.550 "block_size": 512, 00:07:48.550 "num_blocks": 1048576, 00:07:48.550 "name": "malloc0" 00:07:48.550 }, 00:07:48.550 "method": "bdev_malloc_create" 00:07:48.550 }, 00:07:48.550 { 00:07:48.550 "params": { 00:07:48.550 "filename": "/dev/zram1", 00:07:48.550 "name": "uring0" 00:07:48.550 }, 00:07:48.550 "method": "bdev_uring_create" 00:07:48.550 }, 00:07:48.550 { 00:07:48.550 "method": "bdev_wait_for_examine" 00:07:48.550 } 00:07:48.550 ] 00:07:48.550 } 00:07:48.550 ] 00:07:48.550 } 00:07:48.816 [2024-07-15 08:20:40.773513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.816 [2024-07-15 08:20:40.890572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.816 [2024-07-15 08:20:40.944070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.337  Copying: 187/512 [MB] (187 MBps) Copying: 362/512 [MB] (175 MBps) Copying: 512/512 [MB] (average 183 MBps) 00:07:52.337 00:07:52.337 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:52.338 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ jtim7ut5xajt4nqgf1zeymnt8qc2i4pfmnh0zza7nncmg8wlmypu1s8ihcezdbz7yo3utdyt0ut1e874ncqb8n738axgp7zonihedno4hmi793ugsn6yjwki7fo5m90auzmkgyh8vj4n3snnl5n19eyc1b1kc72osb6376lv0kmm7uktbn6wynl1g8jiq470apg4pb09ag0tv26jrbs02jjy5mbdptnd2cgsabsjxqdrwtbzzgilt92y65172qf599asno7e7zn9ry3cackoyol7k1a42qoq35c6oc4hcgu5vsbplblzrn7inib2to761rt8p4ggb918937nb7ynybfxbk0i964xkv9b0mwggkuiitng500sr41wle6boucy3byeor491slushqade1lrxcx6zfqym6pm403et3swho8xtigshpdnd5uums8i2g5ldrasih56vdmdk0sdweavqk3bssqexd5p62nipbjbwmjqxvsta2952la36p67nxnodv2fa9zjnckck9a13zx2zdhdou5yzw3xoga0gkbrkeaue51s7bop26n621r8dru74kadctnhi9tuznib08xljuto9c56v5l26rpskg1ddv7njalkldipso4ozapxfpsztf24367gsxj363ukl6t79scfo45zr8zj4hgl2lpwy8irma4rjo76wsgngrq7je2v7tu0tx915hr0thxadu9ubs7lopy0d9ug1eyc5if7pbo8f57fetb2smk9kyzrl84xbs9xi68541t2yck83s4bnk991twr6o80xqnwwu61js4eexw8e0dyqgqda2cpkrvw59ij3ms7heapouixzkajz1vmxfah059p39iswkqbzf9gt1cjcx9n8c80afarjkym6fte307t7kdg8h8ufb5pv9tnmg7x4kkmcztcf4rnr8nmmqo67thnya2f5cw27k8ql1ggkwxzif2n85153jsgegzq7zdmxknefg6hu5sa4s8srkghehd4lj3xitits01 == 
\j\t\i\m\7\u\t\5\x\a\j\t\4\n\q\g\f\1\z\e\y\m\n\t\8\q\c\2\i\4\p\f\m\n\h\0\z\z\a\7\n\n\c\m\g\8\w\l\m\y\p\u\1\s\8\i\h\c\e\z\d\b\z\7\y\o\3\u\t\d\y\t\0\u\t\1\e\8\7\4\n\c\q\b\8\n\7\3\8\a\x\g\p\7\z\o\n\i\h\e\d\n\o\4\h\m\i\7\9\3\u\g\s\n\6\y\j\w\k\i\7\f\o\5\m\9\0\a\u\z\m\k\g\y\h\8\v\j\4\n\3\s\n\n\l\5\n\1\9\e\y\c\1\b\1\k\c\7\2\o\s\b\6\3\7\6\l\v\0\k\m\m\7\u\k\t\b\n\6\w\y\n\l\1\g\8\j\i\q\4\7\0\a\p\g\4\p\b\0\9\a\g\0\t\v\2\6\j\r\b\s\0\2\j\j\y\5\m\b\d\p\t\n\d\2\c\g\s\a\b\s\j\x\q\d\r\w\t\b\z\z\g\i\l\t\9\2\y\6\5\1\7\2\q\f\5\9\9\a\s\n\o\7\e\7\z\n\9\r\y\3\c\a\c\k\o\y\o\l\7\k\1\a\4\2\q\o\q\3\5\c\6\o\c\4\h\c\g\u\5\v\s\b\p\l\b\l\z\r\n\7\i\n\i\b\2\t\o\7\6\1\r\t\8\p\4\g\g\b\9\1\8\9\3\7\n\b\7\y\n\y\b\f\x\b\k\0\i\9\6\4\x\k\v\9\b\0\m\w\g\g\k\u\i\i\t\n\g\5\0\0\s\r\4\1\w\l\e\6\b\o\u\c\y\3\b\y\e\o\r\4\9\1\s\l\u\s\h\q\a\d\e\1\l\r\x\c\x\6\z\f\q\y\m\6\p\m\4\0\3\e\t\3\s\w\h\o\8\x\t\i\g\s\h\p\d\n\d\5\u\u\m\s\8\i\2\g\5\l\d\r\a\s\i\h\5\6\v\d\m\d\k\0\s\d\w\e\a\v\q\k\3\b\s\s\q\e\x\d\5\p\6\2\n\i\p\b\j\b\w\m\j\q\x\v\s\t\a\2\9\5\2\l\a\3\6\p\6\7\n\x\n\o\d\v\2\f\a\9\z\j\n\c\k\c\k\9\a\1\3\z\x\2\z\d\h\d\o\u\5\y\z\w\3\x\o\g\a\0\g\k\b\r\k\e\a\u\e\5\1\s\7\b\o\p\2\6\n\6\2\1\r\8\d\r\u\7\4\k\a\d\c\t\n\h\i\9\t\u\z\n\i\b\0\8\x\l\j\u\t\o\9\c\5\6\v\5\l\2\6\r\p\s\k\g\1\d\d\v\7\n\j\a\l\k\l\d\i\p\s\o\4\o\z\a\p\x\f\p\s\z\t\f\2\4\3\6\7\g\s\x\j\3\6\3\u\k\l\6\t\7\9\s\c\f\o\4\5\z\r\8\z\j\4\h\g\l\2\l\p\w\y\8\i\r\m\a\4\r\j\o\7\6\w\s\g\n\g\r\q\7\j\e\2\v\7\t\u\0\t\x\9\1\5\h\r\0\t\h\x\a\d\u\9\u\b\s\7\l\o\p\y\0\d\9\u\g\1\e\y\c\5\i\f\7\p\b\o\8\f\5\7\f\e\t\b\2\s\m\k\9\k\y\z\r\l\8\4\x\b\s\9\x\i\6\8\5\4\1\t\2\y\c\k\8\3\s\4\b\n\k\9\9\1\t\w\r\6\o\8\0\x\q\n\w\w\u\6\1\j\s\4\e\e\x\w\8\e\0\d\y\q\g\q\d\a\2\c\p\k\r\v\w\5\9\i\j\3\m\s\7\h\e\a\p\o\u\i\x\z\k\a\j\z\1\v\m\x\f\a\h\0\5\9\p\3\9\i\s\w\k\q\b\z\f\9\g\t\1\c\j\c\x\9\n\8\c\8\0\a\f\a\r\j\k\y\m\6\f\t\e\3\0\7\t\7\k\d\g\8\h\8\u\f\b\5\p\v\9\t\n\m\g\7\x\4\k\k\m\c\z\t\c\f\4\r\n\r\8\n\m\m\q\o\6\7\t\h\n\y\a\2\f\5\c\w\2\7\k\8\q\l\1\g\g\k\w\x\z\i\f\2\n\8\5\1\5\3\j\s\g\e\g\z\q\7\z\d\m\x\k\n\e\f\g\6\h\u\5\s\a\4\s\8\s\r\k\g\h\e\h\d\4\l\j\3\x\i\t\i\t\s\0\1 ]] 00:07:52.338 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:52.338 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ jtim7ut5xajt4nqgf1zeymnt8qc2i4pfmnh0zza7nncmg8wlmypu1s8ihcezdbz7yo3utdyt0ut1e874ncqb8n738axgp7zonihedno4hmi793ugsn6yjwki7fo5m90auzmkgyh8vj4n3snnl5n19eyc1b1kc72osb6376lv0kmm7uktbn6wynl1g8jiq470apg4pb09ag0tv26jrbs02jjy5mbdptnd2cgsabsjxqdrwtbzzgilt92y65172qf599asno7e7zn9ry3cackoyol7k1a42qoq35c6oc4hcgu5vsbplblzrn7inib2to761rt8p4ggb918937nb7ynybfxbk0i964xkv9b0mwggkuiitng500sr41wle6boucy3byeor491slushqade1lrxcx6zfqym6pm403et3swho8xtigshpdnd5uums8i2g5ldrasih56vdmdk0sdweavqk3bssqexd5p62nipbjbwmjqxvsta2952la36p67nxnodv2fa9zjnckck9a13zx2zdhdou5yzw3xoga0gkbrkeaue51s7bop26n621r8dru74kadctnhi9tuznib08xljuto9c56v5l26rpskg1ddv7njalkldipso4ozapxfpsztf24367gsxj363ukl6t79scfo45zr8zj4hgl2lpwy8irma4rjo76wsgngrq7je2v7tu0tx915hr0thxadu9ubs7lopy0d9ug1eyc5if7pbo8f57fetb2smk9kyzrl84xbs9xi68541t2yck83s4bnk991twr6o80xqnwwu61js4eexw8e0dyqgqda2cpkrvw59ij3ms7heapouixzkajz1vmxfah059p39iswkqbzf9gt1cjcx9n8c80afarjkym6fte307t7kdg8h8ufb5pv9tnmg7x4kkmcztcf4rnr8nmmqo67thnya2f5cw27k8ql1ggkwxzif2n85153jsgegzq7zdmxknefg6hu5sa4s8srkghehd4lj3xitits01 == 
\j\t\i\m\7\u\t\5\x\a\j\t\4\n\q\g\f\1\z\e\y\m\n\t\8\q\c\2\i\4\p\f\m\n\h\0\z\z\a\7\n\n\c\m\g\8\w\l\m\y\p\u\1\s\8\i\h\c\e\z\d\b\z\7\y\o\3\u\t\d\y\t\0\u\t\1\e\8\7\4\n\c\q\b\8\n\7\3\8\a\x\g\p\7\z\o\n\i\h\e\d\n\o\4\h\m\i\7\9\3\u\g\s\n\6\y\j\w\k\i\7\f\o\5\m\9\0\a\u\z\m\k\g\y\h\8\v\j\4\n\3\s\n\n\l\5\n\1\9\e\y\c\1\b\1\k\c\7\2\o\s\b\6\3\7\6\l\v\0\k\m\m\7\u\k\t\b\n\6\w\y\n\l\1\g\8\j\i\q\4\7\0\a\p\g\4\p\b\0\9\a\g\0\t\v\2\6\j\r\b\s\0\2\j\j\y\5\m\b\d\p\t\n\d\2\c\g\s\a\b\s\j\x\q\d\r\w\t\b\z\z\g\i\l\t\9\2\y\6\5\1\7\2\q\f\5\9\9\a\s\n\o\7\e\7\z\n\9\r\y\3\c\a\c\k\o\y\o\l\7\k\1\a\4\2\q\o\q\3\5\c\6\o\c\4\h\c\g\u\5\v\s\b\p\l\b\l\z\r\n\7\i\n\i\b\2\t\o\7\6\1\r\t\8\p\4\g\g\b\9\1\8\9\3\7\n\b\7\y\n\y\b\f\x\b\k\0\i\9\6\4\x\k\v\9\b\0\m\w\g\g\k\u\i\i\t\n\g\5\0\0\s\r\4\1\w\l\e\6\b\o\u\c\y\3\b\y\e\o\r\4\9\1\s\l\u\s\h\q\a\d\e\1\l\r\x\c\x\6\z\f\q\y\m\6\p\m\4\0\3\e\t\3\s\w\h\o\8\x\t\i\g\s\h\p\d\n\d\5\u\u\m\s\8\i\2\g\5\l\d\r\a\s\i\h\5\6\v\d\m\d\k\0\s\d\w\e\a\v\q\k\3\b\s\s\q\e\x\d\5\p\6\2\n\i\p\b\j\b\w\m\j\q\x\v\s\t\a\2\9\5\2\l\a\3\6\p\6\7\n\x\n\o\d\v\2\f\a\9\z\j\n\c\k\c\k\9\a\1\3\z\x\2\z\d\h\d\o\u\5\y\z\w\3\x\o\g\a\0\g\k\b\r\k\e\a\u\e\5\1\s\7\b\o\p\2\6\n\6\2\1\r\8\d\r\u\7\4\k\a\d\c\t\n\h\i\9\t\u\z\n\i\b\0\8\x\l\j\u\t\o\9\c\5\6\v\5\l\2\6\r\p\s\k\g\1\d\d\v\7\n\j\a\l\k\l\d\i\p\s\o\4\o\z\a\p\x\f\p\s\z\t\f\2\4\3\6\7\g\s\x\j\3\6\3\u\k\l\6\t\7\9\s\c\f\o\4\5\z\r\8\z\j\4\h\g\l\2\l\p\w\y\8\i\r\m\a\4\r\j\o\7\6\w\s\g\n\g\r\q\7\j\e\2\v\7\t\u\0\t\x\9\1\5\h\r\0\t\h\x\a\d\u\9\u\b\s\7\l\o\p\y\0\d\9\u\g\1\e\y\c\5\i\f\7\p\b\o\8\f\5\7\f\e\t\b\2\s\m\k\9\k\y\z\r\l\8\4\x\b\s\9\x\i\6\8\5\4\1\t\2\y\c\k\8\3\s\4\b\n\k\9\9\1\t\w\r\6\o\8\0\x\q\n\w\w\u\6\1\j\s\4\e\e\x\w\8\e\0\d\y\q\g\q\d\a\2\c\p\k\r\v\w\5\9\i\j\3\m\s\7\h\e\a\p\o\u\i\x\z\k\a\j\z\1\v\m\x\f\a\h\0\5\9\p\3\9\i\s\w\k\q\b\z\f\9\g\t\1\c\j\c\x\9\n\8\c\8\0\a\f\a\r\j\k\y\m\6\f\t\e\3\0\7\t\7\k\d\g\8\h\8\u\f\b\5\p\v\9\t\n\m\g\7\x\4\k\k\m\c\z\t\c\f\4\r\n\r\8\n\m\m\q\o\6\7\t\h\n\y\a\2\f\5\c\w\2\7\k\8\q\l\1\g\g\k\w\x\z\i\f\2\n\8\5\1\5\3\j\s\g\e\g\z\q\7\z\d\m\x\k\n\e\f\g\6\h\u\5\s\a\4\s\8\s\r\k\g\h\e\h\d\4\l\j\3\x\i\t\i\t\s\0\1 ]] 00:07:52.338 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:52.597 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:52.597 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:52.597 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:52.597 08:20:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.597 [2024-07-15 08:20:44.758045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:52.597 [2024-07-15 08:20:44.758148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64244 ] 00:07:52.855 { 00:07:52.855 "subsystems": [ 00:07:52.855 { 00:07:52.855 "subsystem": "bdev", 00:07:52.855 "config": [ 00:07:52.855 { 00:07:52.855 "params": { 00:07:52.855 "block_size": 512, 00:07:52.855 "num_blocks": 1048576, 00:07:52.855 "name": "malloc0" 00:07:52.855 }, 00:07:52.855 "method": "bdev_malloc_create" 00:07:52.855 }, 00:07:52.855 { 00:07:52.855 "params": { 00:07:52.855 "filename": "/dev/zram1", 00:07:52.855 "name": "uring0" 00:07:52.855 }, 00:07:52.855 "method": "bdev_uring_create" 00:07:52.856 }, 00:07:52.856 { 00:07:52.856 "method": "bdev_wait_for_examine" 00:07:52.856 } 00:07:52.856 ] 00:07:52.856 } 00:07:52.856 ] 00:07:52.856 } 00:07:52.856 [2024-07-15 08:20:44.893872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.856 [2024-07-15 08:20:45.013421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.114 [2024-07-15 08:20:45.068198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.187  Copying: 147/512 [MB] (147 MBps) Copying: 294/512 [MB] (147 MBps) Copying: 441/512 [MB] (146 MBps) Copying: 512/512 [MB] (average 145 MBps) 00:07:57.187 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:57.187 08:20:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.187 [2024-07-15 08:20:49.278930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:57.187 [2024-07-15 08:20:49.279040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64307 ] 00:07:57.187 { 00:07:57.187 "subsystems": [ 00:07:57.187 { 00:07:57.187 "subsystem": "bdev", 00:07:57.187 "config": [ 00:07:57.187 { 00:07:57.187 "params": { 00:07:57.187 "block_size": 512, 00:07:57.187 "num_blocks": 1048576, 00:07:57.187 "name": "malloc0" 00:07:57.187 }, 00:07:57.187 "method": "bdev_malloc_create" 00:07:57.187 }, 00:07:57.187 { 00:07:57.187 "params": { 00:07:57.187 "filename": "/dev/zram1", 00:07:57.187 "name": "uring0" 00:07:57.187 }, 00:07:57.187 "method": "bdev_uring_create" 00:07:57.187 }, 00:07:57.187 { 00:07:57.187 "params": { 00:07:57.187 "name": "uring0" 00:07:57.187 }, 00:07:57.187 "method": "bdev_uring_delete" 00:07:57.187 }, 00:07:57.187 { 00:07:57.187 "method": "bdev_wait_for_examine" 00:07:57.187 } 00:07:57.187 ] 00:07:57.187 } 00:07:57.187 ] 00:07:57.187 } 00:07:57.446 [2024-07-15 08:20:49.415332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.446 [2024-07-15 08:20:49.537581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.446 [2024-07-15 08:20:49.593272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.273  Copying: 0/0 [B] (average 0 Bps) 00:07:58.273 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.273 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.274 08:20:50 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:58.274 [2024-07-15 08:20:50.309541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:58.274 [2024-07-15 08:20:50.309696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64336 ] 00:07:58.274 { 00:07:58.274 "subsystems": [ 00:07:58.274 { 00:07:58.274 "subsystem": "bdev", 00:07:58.274 "config": [ 00:07:58.274 { 00:07:58.274 "params": { 00:07:58.274 "block_size": 512, 00:07:58.274 "num_blocks": 1048576, 00:07:58.274 "name": "malloc0" 00:07:58.274 }, 00:07:58.274 "method": "bdev_malloc_create" 00:07:58.274 }, 00:07:58.274 { 00:07:58.274 "params": { 00:07:58.274 "filename": "/dev/zram1", 00:07:58.274 "name": "uring0" 00:07:58.274 }, 00:07:58.274 "method": "bdev_uring_create" 00:07:58.274 }, 00:07:58.274 { 00:07:58.274 "params": { 00:07:58.274 "name": "uring0" 00:07:58.274 }, 00:07:58.274 "method": "bdev_uring_delete" 00:07:58.274 }, 00:07:58.274 { 00:07:58.274 "method": "bdev_wait_for_examine" 00:07:58.274 } 00:07:58.274 ] 00:07:58.274 } 00:07:58.274 ] 00:07:58.274 } 00:07:58.532 [2024-07-15 08:20:50.453053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.532 [2024-07-15 08:20:50.574983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.532 [2024-07-15 08:20:50.630891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.791 [2024-07-15 08:20:50.841753] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:58.791 [2024-07-15 08:20:50.841810] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:58.791 [2024-07-15 08:20:50.841823] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:58.791 [2024-07-15 08:20:50.841833] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.049 [2024-07-15 08:20:51.164604] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:59.308 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:59.567 00:07:59.567 real 0m15.822s 00:07:59.567 user 0m10.675s 00:07:59.567 sys 0m12.783s 00:07:59.567 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.567 08:20:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:59.567 ************************************ 00:07:59.567 END TEST dd_uring_copy 00:07:59.567 ************************************ 00:07:59.567 08:20:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:59.567 ************************************ 00:07:59.567 END TEST spdk_dd_uring 00:07:59.567 ************************************ 00:07:59.567 00:07:59.567 real 0m15.964s 00:07:59.567 user 0m10.742s 00:07:59.567 sys 0m12.859s 00:07:59.567 08:20:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.567 08:20:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:59.567 08:20:51 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:59.567 08:20:51 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:59.567 08:20:51 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.567 08:20:51 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.567 08:20:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:59.567 ************************************ 00:07:59.567 START TEST spdk_dd_sparse 00:07:59.567 ************************************ 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:59.567 * Looking for test storage... 00:07:59.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:59.567 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:59.568 1+0 records in 00:07:59.568 1+0 records out 00:07:59.568 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00488613 s, 858 MB/s 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:59.568 1+0 records in 00:07:59.568 1+0 records out 00:07:59.568 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00759353 s, 552 MB/s 00:07:59.568 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:59.827 1+0 records in 00:07:59.827 1+0 records out 00:07:59.827 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00811714 s, 517 MB/s 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 ************************************ 00:07:59.827 START TEST dd_sparse_file_to_file 00:07:59.827 ************************************ 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:59.827 08:20:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 [2024-07-15 08:20:51.809390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:59.827 [2024-07-15 08:20:51.809871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64428 ] 00:07:59.827 { 00:07:59.827 "subsystems": [ 00:07:59.827 { 00:07:59.827 "subsystem": "bdev", 00:07:59.827 "config": [ 00:07:59.827 { 00:07:59.827 "params": { 00:07:59.827 "block_size": 4096, 00:07:59.827 "filename": "dd_sparse_aio_disk", 00:07:59.827 "name": "dd_aio" 00:07:59.827 }, 00:07:59.827 "method": "bdev_aio_create" 00:07:59.827 }, 00:07:59.827 { 00:07:59.827 "params": { 00:07:59.827 "lvs_name": "dd_lvstore", 00:07:59.827 "bdev_name": "dd_aio" 00:07:59.827 }, 00:07:59.827 "method": "bdev_lvol_create_lvstore" 00:07:59.827 }, 00:07:59.827 { 00:07:59.827 "method": "bdev_wait_for_examine" 00:07:59.827 } 00:07:59.827 ] 00:07:59.827 } 00:07:59.827 ] 00:07:59.827 } 00:07:59.827 [2024-07-15 08:20:51.948511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.085 [2024-07-15 08:20:52.097268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.085 [2024-07-15 08:20:52.158756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.601  Copying: 12/36 [MB] (average 800 MBps) 00:08:00.601 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:00.601 08:20:52 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:00.601 ************************************ 00:08:00.601 END TEST dd_sparse_file_to_file 00:08:00.601 ************************************ 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:00.601 00:08:00.601 real 0m0.798s 00:08:00.601 user 0m0.508s 00:08:00.601 sys 0m0.393s 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:00.601 ************************************ 00:08:00.601 START TEST dd_sparse_file_to_bdev 00:08:00.601 ************************************ 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:00.601 08:20:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.601 [2024-07-15 08:20:52.659587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:00.601 [2024-07-15 08:20:52.659699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64470 ] 00:08:00.601 { 00:08:00.602 "subsystems": [ 00:08:00.602 { 00:08:00.602 "subsystem": "bdev", 00:08:00.602 "config": [ 00:08:00.602 { 00:08:00.602 "params": { 00:08:00.602 "block_size": 4096, 00:08:00.602 "filename": "dd_sparse_aio_disk", 00:08:00.602 "name": "dd_aio" 00:08:00.602 }, 00:08:00.602 "method": "bdev_aio_create" 00:08:00.602 }, 00:08:00.602 { 00:08:00.602 "params": { 00:08:00.602 "lvs_name": "dd_lvstore", 00:08:00.602 "lvol_name": "dd_lvol", 00:08:00.602 "size_in_mib": 36, 00:08:00.602 "thin_provision": true 00:08:00.602 }, 00:08:00.602 "method": "bdev_lvol_create" 00:08:00.602 }, 00:08:00.602 { 00:08:00.602 "method": "bdev_wait_for_examine" 00:08:00.602 } 00:08:00.602 ] 00:08:00.602 } 00:08:00.602 ] 00:08:00.602 } 00:08:00.859 [2024-07-15 08:20:52.798139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.859 [2024-07-15 08:20:52.915593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.859 [2024-07-15 08:20:52.968168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.375  Copying: 12/36 [MB] (average 521 MBps) 00:08:01.375 00:08:01.375 ************************************ 00:08:01.375 END TEST dd_sparse_file_to_bdev 00:08:01.375 ************************************ 00:08:01.375 00:08:01.375 real 0m0.708s 00:08:01.375 user 0m0.466s 00:08:01.375 sys 0m0.354s 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:01.375 ************************************ 00:08:01.375 START TEST dd_sparse_bdev_to_file 00:08:01.375 ************************************ 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:01.375 08:20:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:01.375 [2024-07-15 08:20:53.414052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:01.375 [2024-07-15 08:20:53.414161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64508 ] 00:08:01.375 { 00:08:01.375 "subsystems": [ 00:08:01.375 { 00:08:01.375 "subsystem": "bdev", 00:08:01.375 "config": [ 00:08:01.375 { 00:08:01.375 "params": { 00:08:01.375 "block_size": 4096, 00:08:01.375 "filename": "dd_sparse_aio_disk", 00:08:01.375 "name": "dd_aio" 00:08:01.375 }, 00:08:01.375 "method": "bdev_aio_create" 00:08:01.375 }, 00:08:01.375 { 00:08:01.375 "method": "bdev_wait_for_examine" 00:08:01.375 } 00:08:01.375 ] 00:08:01.375 } 00:08:01.375 ] 00:08:01.375 } 00:08:01.632 [2024-07-15 08:20:53.555090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.632 [2024-07-15 08:20:53.682703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.632 [2024-07-15 08:20:53.739287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.148  Copying: 12/36 [MB] (average 1090 MBps) 00:08:02.148 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:02.148 ************************************ 00:08:02.148 END TEST dd_sparse_bdev_to_file 00:08:02.148 ************************************ 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:02.148 00:08:02.148 real 0m0.735s 00:08:02.148 user 0m0.500s 00:08:02.148 sys 0m0.333s 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:02.148 ************************************ 00:08:02.148 END TEST spdk_dd_sparse 00:08:02.148 ************************************ 00:08:02.148 00:08:02.148 real 0m2.541s 00:08:02.148 user 0m1.575s 00:08:02.148 sys 0m1.267s 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.148 08:20:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:02.148 08:20:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:02.148 08:20:54 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:02.148 08:20:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.148 08:20:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.148 08:20:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.148 ************************************ 00:08:02.148 START TEST spdk_dd_negative 00:08:02.148 ************************************ 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:02.148 * Looking for test storage... 00:08:02.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.148 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.148 ************************************ 00:08:02.148 START TEST dd_invalid_arguments 00:08:02.148 ************************************ 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.149 08:20:54 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.149 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:02.407 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:02.407 00:08:02.407 CPU options: 00:08:02.407 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:02.407 (like [0,1,10]) 00:08:02.407 --lcores lcore to CPU mapping list. The list is in the format: 00:08:02.407 [<,lcores[@CPUs]>...] 00:08:02.407 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:02.407 Within the group, '-' is used for range separator, 00:08:02.407 ',' is used for single number separator. 00:08:02.407 '( )' can be omitted for single element group, 00:08:02.407 '@' can be omitted if cpus and lcores have the same value 00:08:02.407 --disable-cpumask-locks Disable CPU core lock files. 00:08:02.407 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:02.407 pollers in the app support interrupt mode) 00:08:02.407 -p, --main-core main (primary) core for DPDK 00:08:02.407 00:08:02.407 Configuration options: 00:08:02.407 -c, --config, --json JSON config file 00:08:02.407 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:02.407 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:02.407 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:02.407 --rpcs-allowed comma-separated list of permitted RPCS 00:08:02.407 --json-ignore-init-errors don't exit on invalid config entry 00:08:02.407 00:08:02.407 Memory options: 00:08:02.407 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:02.407 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:02.407 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:02.407 -R, --huge-unlink unlink huge files after initialization 00:08:02.407 -n, --mem-channels number of memory channels used for DPDK 00:08:02.407 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:02.407 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:02.407 --no-huge run without using hugepages 00:08:02.407 -i, --shm-id shared memory ID (optional) 00:08:02.407 -g, --single-file-segments force creating just one hugetlbfs file 00:08:02.407 00:08:02.407 PCI options: 00:08:02.407 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:02.407 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:02.407 -u, --no-pci disable PCI access 00:08:02.407 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:02.407 00:08:02.407 Log options: 00:08:02.407 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:02.407 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:02.407 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:02.407 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:02.407 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:02.407 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:02.407 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:02.407 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:02.407 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:02.407 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:02.407 virtio_vfio_user, vmd) 00:08:02.407 --silence-noticelog disable notice level logging to stderr 00:08:02.407 00:08:02.407 Trace options: 00:08:02.407 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:02.407 setting 0 to disable trace (default 32768) 00:08:02.407 Tracepoints vary in size and can use more than one trace entry. 00:08:02.407 -e, --tpoint-group [:] 00:08:02.407 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:02.407 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:02.407 [2024-07-15 08:20:54.373151] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:02.407 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:02.407 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:02.407 a tracepoint group. First tpoint inside a group can be enabled by 00:08:02.407 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:02.408 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:02.408 in /include/spdk_internal/trace_defs.h 00:08:02.408 00:08:02.408 Other options: 00:08:02.408 -h, --help show this usage 00:08:02.408 -v, --version print SPDK version 00:08:02.408 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:02.408 --env-context Opaque context for use of the env implementation 00:08:02.408 00:08:02.408 Application specific: 00:08:02.408 [--------- DD Options ---------] 00:08:02.408 --if Input file. Must specify either --if or --ib. 00:08:02.408 --ib Input bdev. Must specifier either --if or --ib 00:08:02.408 --of Output file. Must specify either --of or --ob. 00:08:02.408 --ob Output bdev. Must specify either --of or --ob. 00:08:02.408 --iflag Input file flags. 00:08:02.408 --oflag Output file flags. 00:08:02.408 --bs I/O unit size (default: 4096) 00:08:02.408 --qd Queue depth (default: 2) 00:08:02.408 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:02.408 --skip Skip this many I/O units at start of input. (default: 0) 00:08:02.408 --seek Skip this many I/O units at start of output. (default: 0) 00:08:02.408 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:02.408 --sparse Enable hole skipping in input target 00:08:02.408 Available iflag and oflag values: 00:08:02.408 append - append mode 00:08:02.408 direct - use direct I/O for data 00:08:02.408 directory - fail unless a directory 00:08:02.408 dsync - use synchronized I/O for data 00:08:02.408 noatime - do not update access time 00:08:02.408 noctty - do not assign controlling terminal from file 00:08:02.408 nofollow - do not follow symlinks 00:08:02.408 nonblock - use non-blocking I/O 00:08:02.408 sync - use synchronized I/O for data and metadata 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.408 00:08:02.408 real 0m0.083s 00:08:02.408 user 0m0.055s 00:08:02.408 sys 0m0.026s 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:02.408 ************************************ 00:08:02.408 END TEST dd_invalid_arguments 00:08:02.408 ************************************ 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.408 ************************************ 00:08:02.408 START TEST dd_double_input 00:08:02.408 ************************************ 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:02.408 [2024-07-15 08:20:54.500435] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.408 00:08:02.408 real 0m0.077s 00:08:02.408 user 0m0.047s 00:08:02.408 sys 0m0.028s 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:02.408 ************************************ 00:08:02.408 END TEST dd_double_input 00:08:02.408 ************************************ 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.408 ************************************ 00:08:02.408 START TEST dd_double_output 00:08:02.408 ************************************ 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.408 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:02.668 [2024-07-15 08:20:54.629362] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.668 00:08:02.668 real 0m0.075s 00:08:02.668 user 0m0.049s 00:08:02.668 sys 0m0.025s 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:02.668 ************************************ 00:08:02.668 END TEST dd_double_output 00:08:02.668 ************************************ 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.668 ************************************ 00:08:02.668 START TEST dd_no_input 00:08:02.668 ************************************ 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.668 08:20:54 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:02.668 [2024-07-15 08:20:54.757235] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.668 00:08:02.668 real 0m0.072s 00:08:02.668 user 0m0.043s 00:08:02.668 sys 0m0.028s 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:02.668 ************************************ 00:08:02.668 END TEST dd_no_input 00:08:02.668 ************************************ 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.668 ************************************ 00:08:02.668 START TEST dd_no_output 00:08:02.668 ************************************ 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.668 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.668 08:20:54 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.926 [2024-07-15 08:20:54.881438] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.926 00:08:02.926 real 0m0.073s 00:08:02.926 user 0m0.044s 00:08:02.926 sys 0m0.028s 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:02.926 ************************************ 00:08:02.926 END TEST dd_no_output 00:08:02.926 ************************************ 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.926 ************************************ 00:08:02.926 START TEST dd_wrong_blocksize 00:08:02.926 ************************************ 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.926 08:20:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:02.926 [2024-07-15 08:20:55.012629] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.926 00:08:02.926 real 0m0.077s 00:08:02.926 user 0m0.050s 00:08:02.926 sys 0m0.025s 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:02.926 ************************************ 00:08:02.926 END TEST dd_wrong_blocksize 00:08:02.926 ************************************ 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.926 08:20:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.927 ************************************ 00:08:02.927 START TEST dd_smaller_blocksize 00:08:02.927 ************************************ 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.927 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:03.184 [2024-07-15 08:20:55.140048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:03.184 [2024-07-15 08:20:55.140150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64731 ] 00:08:03.184 [2024-07-15 08:20:55.278252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.441 [2024-07-15 08:20:55.395517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.441 [2024-07-15 08:20:55.447694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.698 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:03.698 [2024-07-15 08:20:55.752971] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:03.698 [2024-07-15 08:20:55.753076] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.698 [2024-07-15 08:20:55.864845] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.956 00:08:03.956 real 0m0.881s 00:08:03.956 user 0m0.410s 00:08:03.956 sys 0m0.364s 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.956 08:20:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:03.956 ************************************ 00:08:03.956 END TEST dd_smaller_blocksize 00:08:03.956 ************************************ 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.956 ************************************ 00:08:03.956 START TEST dd_invalid_count 00:08:03.956 ************************************ 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:03.956 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:03.957 [2024-07-15 08:20:56.082329] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.957 00:08:03.957 real 0m0.077s 00:08:03.957 user 0m0.051s 00:08:03.957 sys 0m0.025s 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.957 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:03.957 ************************************ 00:08:03.957 END TEST dd_invalid_count 
00:08:03.957 ************************************ 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.215 ************************************ 00:08:04.215 START TEST dd_invalid_oflag 00:08:04.215 ************************************ 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:04.215 [2024-07-15 08:20:56.210490] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.215 00:08:04.215 real 0m0.074s 00:08:04.215 user 0m0.047s 00:08:04.215 sys 0m0.024s 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:04.215 
************************************ 00:08:04.215 END TEST dd_invalid_oflag 00:08:04.215 ************************************ 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.215 ************************************ 00:08:04.215 START TEST dd_invalid_iflag 00:08:04.215 ************************************ 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:04.215 [2024-07-15 08:20:56.333413] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.215 00:08:04.215 real 0m0.072s 00:08:04.215 user 0m0.043s 00:08:04.215 sys 0m0.029s 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.215 08:20:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.215 ************************************ 00:08:04.215 END TEST dd_invalid_iflag 00:08:04.215 ************************************ 00:08:04.472 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.472 08:20:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:04.472 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.473 ************************************ 00:08:04.473 START TEST dd_unknown_flag 00:08:04.473 ************************************ 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.473 08:20:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:04.473 [2024-07-15 08:20:56.458625] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
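Editor's note: each of the dd negative cases traced above (wrong blocksize, oversized blocksize, negative count, stray --oflag/--iflag) follows the same failure-assertion pattern from autotest_common.sh: the spdk_dd invocation is wrapped so that a non-zero exit is the expected outcome, the raw exit status is captured as `es`, statuses above 128 (signal deaths) are folded down, and the case only passes when `es` ends up non-zero. The snippet below is a minimal standalone sketch of that pattern, not the real `NOT`/`valid_exec_arg` helpers; the binary path and flags are copied from the traces above, with the dump-file paths shortened.

```bash
#!/usr/bin/env bash
# Minimal sketch of the "expect failure" pattern used by the dd negative tests.
# expect_failure is an illustrative helper name, not the autotest_common.sh API.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

expect_failure() {
    local es=0
    "$@" || es=$?                 # run the command, capture its exit status
    (( es > 128 )) && es=1        # fold signal deaths (>128) into a plain failure
    if (( es == 0 )); then
        echo "expected failure but command succeeded: $*" >&2
        return 1
    fi
    return 0                      # a non-zero exit status is the passing outcome
}

expect_failure "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --bs=0       # invalid --bs
expect_failure "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --count=-9   # invalid --count
expect_failure "$SPDK_DD" --ib= --ob= --oflag=0                    # --oflag without --of
```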
00:08:04.473 [2024-07-15 08:20:56.458744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64824 ] 00:08:04.473 [2024-07-15 08:20:56.595926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.730 [2024-07-15 08:20:56.730813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.730 [2024-07-15 08:20:56.792646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.730 [2024-07-15 08:20:56.831679] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:04.730 [2024-07-15 08:20:56.831763] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.730 [2024-07-15 08:20:56.831829] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:04.730 [2024-07-15 08:20:56.831846] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.730 [2024-07-15 08:20:56.832114] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:04.730 [2024-07-15 08:20:56.832135] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.730 [2024-07-15 08:20:56.832195] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:04.730 [2024-07-15 08:20:56.832208] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:04.987 [2024-07-15 08:20:56.950495] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.987 00:08:04.987 real 0m0.658s 00:08:04.987 user 0m0.384s 00:08:04.987 sys 0m0.174s 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.987 ************************************ 00:08:04.987 END TEST dd_unknown_flag 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 ************************************ 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 ************************************ 00:08:04.987 START TEST dd_invalid_json 00:08:04.987 ************************************ 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:08:04.987 08:20:57 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.987 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:05.245 [2024-07-15 08:20:57.184040] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
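Editor's note: the dd_invalid_json case traced above passes `--json /dev/fd/62` while feeding nothing on that descriptor (the bare `:` at negative_dd.sh@95 is the empty producer), so spdk_dd reports "JSON data cannot be empty" and exits non-zero. A hedged way to reproduce that invocation outside the harness, assuming the same binary path, is sketched below.

```bash
# Reproduce the empty-JSON case: fd 62 is wired to the (empty) output of ':',
# so spdk_dd finds no JSON data on /dev/fd/62 and must fail.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=dd.dump0 --of=dd.dump1 --json /dev/fd/62 62< <(:); then
    echo "spdk_dd unexpectedly accepted empty JSON" >&2
    exit 1
fi
```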
00:08:05.245 [2024-07-15 08:20:57.184152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64853 ] 00:08:05.245 [2024-07-15 08:20:57.325993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.502 [2024-07-15 08:20:57.455040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.502 [2024-07-15 08:20:57.455146] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:05.502 [2024-07-15 08:20:57.455168] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:05.502 [2024-07-15 08:20:57.455180] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.502 [2024-07-15 08:20:57.455225] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.502 00:08:05.502 real 0m0.450s 00:08:05.502 user 0m0.276s 00:08:05.502 sys 0m0.072s 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:05.502 ************************************ 00:08:05.502 END TEST dd_invalid_json 00:08:05.502 ************************************ 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.502 00:08:05.502 real 0m3.398s 00:08:05.502 user 0m1.730s 00:08:05.502 sys 0m1.292s 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.502 08:20:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.502 ************************************ 00:08:05.502 END TEST spdk_dd_negative 00:08:05.502 ************************************ 00:08:05.502 08:20:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:05.502 00:08:05.502 real 1m20.789s 00:08:05.502 user 0m53.386s 00:08:05.502 sys 0m33.648s 00:08:05.502 08:20:57 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.502 08:20:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:05.502 ************************************ 00:08:05.502 END TEST spdk_dd 00:08:05.502 ************************************ 00:08:05.774 08:20:57 -- common/autotest_common.sh@1142 -- # return 0 00:08:05.774 08:20:57 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:05.774 08:20:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:05.774 08:20:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:05.774 08:20:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.774 08:20:57 -- common/autotest_common.sh@10 -- # set +x 00:08:05.774 08:20:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:08:05.774 08:20:57 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:05.774 08:20:57 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:05.774 08:20:57 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:05.774 08:20:57 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:05.774 08:20:57 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:05.775 08:20:57 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:05.775 08:20:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.775 08:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.775 08:20:57 -- common/autotest_common.sh@10 -- # set +x 00:08:05.775 ************************************ 00:08:05.775 START TEST nvmf_tcp 00:08:05.775 ************************************ 00:08:05.775 08:20:57 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:05.775 * Looking for test storage... 00:08:05.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.775 08:20:57 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.775 08:20:57 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.775 08:20:57 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.775 08:20:57 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.775 08:20:57 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.775 08:20:57 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.775 08:20:57 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:05.775 08:20:57 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:05.775 08:20:57 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.775 08:20:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:05.775 08:20:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:05.775 08:20:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.775 08:20:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.775 08:20:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.775 ************************************ 00:08:05.775 START TEST nvmf_host_management 00:08:05.775 ************************************ 00:08:05.775 
08:20:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:05.775 * Looking for test storage... 00:08:05.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.775 08:20:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.775 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:05.775 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.775 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.775 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.057 08:20:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:06.058 Cannot find device "nvmf_init_br" 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:06.058 Cannot find device "nvmf_tgt_br" 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.058 Cannot find device "nvmf_tgt_br2" 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:06.058 08:20:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:06.058 Cannot find device "nvmf_init_br" 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:06.058 Cannot find device "nvmf_tgt_br" 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:06.058 08:20:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:06.058 Cannot find device "nvmf_tgt_br2" 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:06.058 Cannot find device "nvmf_br" 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:06.058 Cannot find device "nvmf_init_if" 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.058 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
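Editor's note: the nvmf_veth_init block traced here builds the virtual test network that the rest of the run depends on: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs for the initiator and two target interfaces, addresses in 10.0.0.0/24, and, just below, a bridge that ties the peer ends together plus an iptables rule for port 4420 and connectivity pings. The following is a condensed sketch of that topology, mirroring the traced ip/iptables commands but omitting the cleanup and error handling of the real common.sh.

```bash
# Condensed sketch of the veth/namespace topology built by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface

# Move the target ends into the namespace and assign the test addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the peer ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```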
00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:06.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:08:06.316 00:08:06.316 --- 10.0.0.2 ping statistics --- 00:08:06.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.316 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:06.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:06.316 00:08:06.316 --- 10.0.0.3 ping statistics --- 00:08:06.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.316 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:06.316 00:08:06.316 --- 10.0.0.1 ping statistics --- 00:08:06.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.316 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65110 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65110 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65110 ']' 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.316 08:20:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.316 [2024-07-15 08:20:58.445982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:06.316 [2024-07-15 08:20:58.446072] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.574 [2024-07-15 08:20:58.588937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.574 [2024-07-15 08:20:58.718964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.574 [2024-07-15 08:20:58.719036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.574 [2024-07-15 08:20:58.719050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.574 [2024-07-15 08:20:58.719060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.574 [2024-07-15 08:20:58.719070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.574 [2024-07-15 08:20:58.719238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.574 [2024-07-15 08:20:58.719972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.574 [2024-07-15 08:20:58.720114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.574 [2024-07-15 08:20:58.720121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.832 [2024-07-15 08:20:58.775461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.398 [2024-07-15 08:20:59.474639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.398 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.399 Malloc0 00:08:07.399 [2024-07-15 08:20:59.552933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.399 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
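Editor's note: at this point the target is running inside the namespace (launched via `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E`), the TCP transport has been created with `nvmf_create_transport -t tcp -o -u 8192`, a 64 MiB / 512-byte-block Malloc0 bdev exists, and the subsystem is listening on 10.0.0.2 port 4420. The rpcs.txt batch itself is not visible in the log, so the rpc.py sequence below is only an approximation of the target-side configuration, built from the values that do appear (serial number, NQNs, address, port).

```bash
# Approximate target-side setup driven over the RPC socket; the real test
# batches these through rpcs.txt, whose exact contents are not in the log.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
```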
00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65170 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65170 /var/tmp/bdevperf.sock 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65170 ']' 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:07.657 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:07.657 { 00:08:07.657 "params": { 00:08:07.657 "name": "Nvme$subsystem", 00:08:07.657 "trtype": "$TEST_TRANSPORT", 00:08:07.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.657 "adrfam": "ipv4", 00:08:07.657 "trsvcid": "$NVMF_PORT", 00:08:07.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.657 "hdgst": ${hdgst:-false}, 00:08:07.657 "ddgst": ${ddgst:-false} 00:08:07.657 }, 00:08:07.657 "method": "bdev_nvme_attach_controller" 00:08:07.657 } 00:08:07.657 EOF 00:08:07.658 )") 00:08:07.658 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:07.658 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:07.658 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:07.658 08:20:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:07.658 "params": { 00:08:07.658 "name": "Nvme0", 00:08:07.658 "trtype": "tcp", 00:08:07.658 "traddr": "10.0.0.2", 00:08:07.658 "adrfam": "ipv4", 00:08:07.658 "trsvcid": "4420", 00:08:07.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:07.658 "hdgst": false, 00:08:07.658 "ddgst": false 00:08:07.658 }, 00:08:07.658 "method": "bdev_nvme_attach_controller" 00:08:07.658 }' 00:08:07.658 [2024-07-15 08:20:59.649486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
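Editor's note: the bdevperf initiator above runs with `-q 64 -o 65536 -w verify -t 10` and takes its bdev configuration as JSON on `/dev/fd/63`; the `printf` in the trace shows the resolved `bdev_nvme_attach_controller` parameters produced by `gen_nvmf_target_json 0`. The reconstruction below is hedged: the `params`/`method` fragment is copied from the log, while the surrounding `subsystems`/`bdev`/`config` wrapper is an assumption about what the helper emits rather than something visible in the trace.

```bash
# Feed the generated bdev config to bdevperf over a file descriptor, as the
# test does with --json /dev/fd/63. Only the params/method fragment appears
# verbatim in the trace; the outer wrapper is assumed.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
)
```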
00:08:07.658 [2024-07-15 08:20:59.649563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65170 ] 00:08:07.658 [2024-07-15 08:20:59.810643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.916 [2024-07-15 08:20:59.950289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.916 [2024-07-15 08:21:00.015259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.174 Running I/O for 10 seconds... 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.743 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.743 [2024-07-15 08:21:00.741938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.743 [2024-07-15 08:21:00.741992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.743 [2024-07-15 08:21:00.742005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.743 [2024-07-15 08:21:00.742014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.743 [2024-07-15 08:21:00.742023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.743 [2024-07-15 08:21:00.742032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742155] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the 
state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657950 is same with the state(5) to be set 00:08:08.744 [2024-07-15 08:21:00.742596] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.744 [2024-07-15 08:21:00.742955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.744 [2024-07-15 08:21:00.742965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.742976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.742986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.742998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.745 [2024-07-15 08:21:00.743928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.745 [2024-07-15 08:21:00.743940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.743950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.743961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.743971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.743982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.743992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.744003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.744013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.744025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.744041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.744053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.744062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.744075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.746 [2024-07-15 08:21:00.744085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.744096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cabec0 is same with the state(5) to be set 00:08:08.746 [2024-07-15 08:21:00.744167] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cabec0 was disconnected and freed. reset controller. 00:08:08.746 [2024-07-15 08:21:00.745315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:08.746 task offset: 106496 on job bdev=Nvme0n1 fails 00:08:08.746 00:08:08.746 Latency(us) 00:08:08.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:08.746 Job: Nvme0n1 ended in about 0.61 seconds with error 00:08:08.746 Verification LBA range: start 0x0 length 0x400 00:08:08.746 Nvme0n1 : 0.61 1372.08 85.76 105.54 0.00 41947.64 3425.75 45041.11 00:08:08.746 =================================================================================================================== 00:08:08.746 Total : 1372.08 85.76 105.54 0.00 41947.64 3425.75 45041.11 00:08:08.746 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:08.746 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.746 [2024-07-15 08:21:00.747814] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.746 [2024-07-15 08:21:00.747854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3d50 (9): Bad file descriptor 00:08:08.746 [2024-07-15 08:21:00.753473] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:08.746 [2024-07-15 08:21:00.753576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:08.746 [2024-07-15 08:21:00.753602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.746 [2024-07-15 08:21:00.753619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:08.746 [2024-07-15 08:21:00.753630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:08.746 [2024-07-15 08:21:00.753640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:08.746 [2024-07-15 08:21:00.753649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ca3d50 00:08:08.746 [2024-07-15 08:21:00.753685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3d50 (9): Bad file descriptor 00:08:08.746 [2024-07-15 08:21:00.753704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:08.746 [2024-07-15 08:21:00.753714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:08.746 [2024-07-15 08:21:00.753744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:08.746 [2024-07-15 08:21:00.753764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:08.746 08:21:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 08:21:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65170 00:08:09.687 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65170) - No such process 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:09.687 { 00:08:09.687 "params": { 00:08:09.687 "name": "Nvme$subsystem", 00:08:09.687 "trtype": "$TEST_TRANSPORT", 00:08:09.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.687 "adrfam": "ipv4", 00:08:09.687 "trsvcid": "$NVMF_PORT", 00:08:09.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.687 "hdgst": ${hdgst:-false}, 00:08:09.687 "ddgst": ${ddgst:-false} 00:08:09.687 }, 00:08:09.687 "method": "bdev_nvme_attach_controller" 00:08:09.687 } 00:08:09.687 EOF 00:08:09.687 )") 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@554 -- # cat 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:09.687 08:21:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:09.687 "params": { 00:08:09.687 "name": "Nvme0", 00:08:09.687 "trtype": "tcp", 00:08:09.687 "traddr": "10.0.0.2", 00:08:09.687 "adrfam": "ipv4", 00:08:09.687 "trsvcid": "4420", 00:08:09.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:09.687 "hdgst": false, 00:08:09.687 "ddgst": false 00:08:09.687 }, 00:08:09.687 "method": "bdev_nvme_attach_controller" 00:08:09.687 }' 00:08:09.687 [2024-07-15 08:21:01.821433] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:09.688 [2024-07-15 08:21:01.821770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65208 ] 00:08:09.946 [2024-07-15 08:21:01.963957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.946 [2024-07-15 08:21:02.093682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.223 [2024-07-15 08:21:02.159177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.223 Running I/O for 1 seconds... 00:08:11.170 00:08:11.170 Latency(us) 00:08:11.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.170 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:11.170 Verification LBA range: start 0x0 length 0x400 00:08:11.170 Nvme0n1 : 1.04 1481.21 92.58 0.00 0.00 42236.64 4379.00 45041.11 00:08:11.170 =================================================================================================================== 00:08:11.170 Total : 1481.21 92.58 0.00 0.00 42236.64 4379.00 45041.11 00:08:11.437 08:21:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.438 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.438 rmmod nvme_tcp 00:08:11.707 rmmod nvme_fabrics 00:08:11.707 rmmod nvme_keyring 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # 
set -e 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65110 ']' 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65110 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65110 ']' 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65110 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.707 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65110 00:08:11.707 killing process with pid 65110 00:08:11.708 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:11.708 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:11.708 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65110' 00:08:11.708 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65110 00:08:11.708 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65110 00:08:11.980 [2024-07-15 08:21:03.898747] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:11.980 00:08:11.980 real 0m6.109s 00:08:11.980 user 0m23.582s 00:08:11.980 sys 0m1.530s 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.980 ************************************ 00:08:11.980 END TEST nvmf_host_management 00:08:11.980 08:21:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.980 ************************************ 00:08:11.980 08:21:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:11.980 08:21:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:11.980 08:21:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:11.980 08:21:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.980 08:21:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.980 ************************************ 00:08:11.980 START TEST nvmf_lvol 00:08:11.980 ************************************ 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:11.980 * Looking for test storage... 00:08:11.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.980 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
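The common.sh defaults sourced above (the generated NVME_HOSTNQN/NVME_HOSTID pair, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, NVMF_PORT=4420, NVME_CONNECT='nvme connect') exist for tests that attach a kernel initiator; this lvol run drives the target purely over RPC, so the snippet below is only an illustration of how those variables are typically consumed, not a step taken in this log.

# Illustrative only (not executed in this run): kernel-initiator attach using the
# same style of host identity that common.sh generates with `nvme gen-hostnqn`.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # UUID portion, matching the NVME_HOSTID value above

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme list                               # the exported namespace appears as /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:testnqn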
00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:11.981 Cannot find device "nvmf_tgt_br" 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.981 Cannot find device "nvmf_tgt_br2" 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:11.981 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:12.255 Cannot find device "nvmf_tgt_br" 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:12.255 Cannot find device "nvmf_tgt_br2" 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:12.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:12.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:08:12.255 00:08:12.255 --- 10.0.0.2 ping statistics --- 00:08:12.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.255 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:12.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:12.255 00:08:12.255 --- 10.0.0.3 ping statistics --- 00:08:12.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.255 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:12.255 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:12.518 00:08:12.518 --- 10.0.0.1 ping statistics --- 00:08:12.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.518 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65424 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65424 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65424 ']' 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.518 08:21:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.518 [2024-07-15 08:21:04.503470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:12.518 [2024-07-15 08:21:04.503584] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.518 [2024-07-15 08:21:04.644857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.776 [2024-07-15 08:21:04.774185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.776 [2024-07-15 08:21:04.774245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.776 [2024-07-15 08:21:04.774259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.776 [2024-07-15 08:21:04.774269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.776 [2024-07-15 08:21:04.774278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.776 [2024-07-15 08:21:04.774439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.776 [2024-07-15 08:21:04.774777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.776 [2024-07-15 08:21:04.774783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.776 [2024-07-15 08:21:04.831036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.342 08:21:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.342 08:21:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:13.342 08:21:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.342 08:21:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.342 08:21:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.601 08:21:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.601 08:21:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:13.859 [2024-07-15 08:21:05.785353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.859 08:21:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.118 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:14.118 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.378 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:14.378 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:14.636 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:14.894 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7ace564f-6648-44e8-81ba-7c5b82a9940a 00:08:14.894 08:21:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7ace564f-6648-44e8-81ba-7c5b82a9940a lvol 20 00:08:15.153 08:21:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 00:08:15.153 08:21:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.412 08:21:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 00:08:15.671 08:21:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.929 [2024-07-15 08:21:07.982543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.929 08:21:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.186 08:21:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65505 00:08:16.186 08:21:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:16.186 08:21:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:17.121 08:21:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 MY_SNAPSHOT 00:08:17.687 08:21:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9eb0d1e1-de7a-4377-a80f-931dc1ccaa15 00:08:17.687 08:21:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 30 00:08:17.945 08:21:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9eb0d1e1-de7a-4377-a80f-931dc1ccaa15 MY_CLONE 00:08:18.203 08:21:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9ccae044-a964-4715-ae06-d63a90d72aa0 00:08:18.203 08:21:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9ccae044-a964-4715-ae06-d63a90d72aa0 00:08:18.770 08:21:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65505 00:08:26.904 Initializing NVMe Controllers 00:08:26.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:26.904 Controller IO queue size 128, less than required. 00:08:26.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:26.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:26.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:26.904 Initialization complete. Launching workers. 
00:08:26.904 ======================================================== 00:08:26.904 Latency(us) 00:08:26.904 Device Information : IOPS MiB/s Average min max 00:08:26.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10511.20 41.06 12177.75 3723.63 93746.99 00:08:26.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10421.10 40.71 12289.84 3368.45 53236.74 00:08:26.904 ======================================================== 00:08:26.904 Total : 20932.30 81.77 12233.55 3368.45 93746.99 00:08:26.904 00:08:26.904 08:21:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.904 08:21:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 00:08:27.162 08:21:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ace564f-6648-44e8-81ba-7c5b82a9940a 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.421 rmmod nvme_tcp 00:08:27.421 rmmod nvme_fabrics 00:08:27.421 rmmod nvme_keyring 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65424 ']' 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65424 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65424 ']' 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65424 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65424 00:08:27.421 killing process with pid 65424 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65424' 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65424 00:08:27.421 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65424 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
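The snapshot and clone activity interleaved with the perf run above reduces to four lvol RPCs issued while spdk_nvme_perf (pid 65505) keeps I/O in flight. Restated in order, using the per-run UUIDs reported in this log (they change on every run):

# Illustrative restatement of the lvol lifecycle exercised above; UUIDs are per-run values from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_lvol_snapshot e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 MY_SNAPSHOT   # freeze the live lvol
$rpc bdev_lvol_resize   e0f69d62-4faa-4fda-a8ba-7bbf57ea6770 30            # grow it to LVOL_BDEV_FINAL_SIZE MiB
$rpc bdev_lvol_clone    9eb0d1e1-de7a-4377-a80f-931dc1ccaa15 MY_CLONE      # writable clone of the snapshot
$rpc bdev_lvol_inflate  9ccae044-a964-4715-ae06-d63a90d72aa0               # detach the clone from its snapshot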
00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:27.680 ************************************ 00:08:27.680 END TEST nvmf_lvol 00:08:27.680 ************************************ 00:08:27.680 00:08:27.680 real 0m15.841s 00:08:27.680 user 1m5.874s 00:08:27.680 sys 0m4.320s 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.680 08:21:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.938 08:21:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:27.938 08:21:19 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:27.938 08:21:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.938 08:21:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.938 08:21:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.938 ************************************ 00:08:27.938 START TEST nvmf_lvs_grow 00:08:27.938 ************************************ 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:27.938 * Looking for test storage... 
00:08:27.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.938 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.939 08:21:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:27.939 Cannot find device "nvmf_tgt_br" 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.939 Cannot find device "nvmf_tgt_br2" 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:27.939 Cannot find device "nvmf_tgt_br" 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:27.939 Cannot find device "nvmf_tgt_br2" 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:27.939 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.199 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:28.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:28.199 00:08:28.199 --- 10.0.0.2 ping statistics --- 00:08:28.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.199 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:28.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:28.199 00:08:28.199 --- 10.0.0.3 ping statistics --- 00:08:28.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.199 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:28.199 00:08:28.199 --- 10.0.0.1 ping statistics --- 00:08:28.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.199 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.199 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65826 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65826 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65826 ']' 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
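The same veth and network-namespace topology is rebuilt here for the lvs_grow test. Condensed into one place, and assuming only iproute2 and iptables on the host, nvmf_veth_init amounts to roughly the following (interface and namespace names match the log):

# Condensed sketch of the nvmf_veth_init steps replayed above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port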
00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.200 08:21:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.458 [2024-07-15 08:21:20.391839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:28.458 [2024-07-15 08:21:20.391939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.458 [2024-07-15 08:21:20.529385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.716 [2024-07-15 08:21:20.657068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.716 [2024-07-15 08:21:20.657139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.716 [2024-07-15 08:21:20.657159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.716 [2024-07-15 08:21:20.657170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.716 [2024-07-15 08:21:20.657180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.716 [2024-07-15 08:21:20.657217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.716 [2024-07-15 08:21:20.714335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.282 08:21:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.541 [2024-07-15 08:21:21.616017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.541 ************************************ 00:08:29.541 START TEST lvs_grow_clean 00:08:29.541 ************************************ 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.541 08:21:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.541 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.798 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.799 08:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:30.056 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=96706397-f129-44ae-8688-7c4bfa0be93c 00:08:30.056 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:30.056 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.315 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.315 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.315 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96706397-f129-44ae-8688-7c4bfa0be93c lvol 150 00:08:30.590 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5 00:08:30.590 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.590 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.847 [2024-07-15 08:21:22.877650] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.847 [2024-07-15 08:21:22.877751] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.847 true 00:08:30.847 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:30.847 08:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:31.104 08:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:31.104 08:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.361 08:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5 00:08:31.618 08:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.876 [2024-07-15 08:21:24.006319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.876 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65908 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65908 /var/tmp/bdevperf.sock 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65908 ']' 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.133 08:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.133 [2024-07-15 08:21:24.301994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:32.134 [2024-07-15 08:21:24.302084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65908 ] 00:08:32.390 [2024-07-15 08:21:24.437327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.390 [2024-07-15 08:21:24.553336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.647 [2024-07-15 08:21:24.606214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.212 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.212 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:33.212 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.470 Nvme0n1 00:08:33.470 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:33.728 [ 00:08:33.728 { 00:08:33.728 "name": "Nvme0n1", 00:08:33.728 "aliases": [ 00:08:33.728 "7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5" 00:08:33.728 ], 00:08:33.728 "product_name": "NVMe disk", 00:08:33.728 "block_size": 4096, 00:08:33.728 "num_blocks": 38912, 00:08:33.728 "uuid": "7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5", 00:08:33.728 "assigned_rate_limits": { 00:08:33.728 "rw_ios_per_sec": 0, 00:08:33.728 "rw_mbytes_per_sec": 0, 00:08:33.728 "r_mbytes_per_sec": 0, 00:08:33.728 "w_mbytes_per_sec": 0 00:08:33.728 }, 00:08:33.728 "claimed": false, 00:08:33.728 "zoned": false, 00:08:33.728 "supported_io_types": { 00:08:33.728 "read": true, 00:08:33.728 "write": true, 00:08:33.728 "unmap": true, 00:08:33.728 "flush": true, 00:08:33.728 "reset": true, 00:08:33.728 "nvme_admin": true, 00:08:33.728 "nvme_io": true, 00:08:33.728 "nvme_io_md": false, 00:08:33.728 "write_zeroes": true, 00:08:33.728 "zcopy": false, 00:08:33.728 "get_zone_info": false, 00:08:33.728 "zone_management": false, 00:08:33.728 "zone_append": false, 00:08:33.728 "compare": true, 00:08:33.728 "compare_and_write": true, 00:08:33.728 "abort": true, 00:08:33.728 "seek_hole": false, 00:08:33.728 "seek_data": false, 00:08:33.728 "copy": true, 00:08:33.728 "nvme_iov_md": false 00:08:33.728 }, 00:08:33.728 "memory_domains": [ 00:08:33.728 { 00:08:33.728 "dma_device_id": "system", 00:08:33.728 "dma_device_type": 1 00:08:33.728 } 00:08:33.728 ], 00:08:33.728 "driver_specific": { 00:08:33.728 "nvme": [ 00:08:33.728 { 00:08:33.728 "trid": { 00:08:33.728 "trtype": "TCP", 00:08:33.728 "adrfam": "IPv4", 00:08:33.728 "traddr": "10.0.0.2", 00:08:33.728 "trsvcid": "4420", 00:08:33.728 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:33.728 }, 00:08:33.728 "ctrlr_data": { 00:08:33.728 "cntlid": 1, 00:08:33.728 "vendor_id": "0x8086", 00:08:33.728 "model_number": "SPDK bdev Controller", 00:08:33.728 "serial_number": "SPDK0", 00:08:33.728 "firmware_revision": "24.09", 00:08:33.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.728 "oacs": { 00:08:33.728 "security": 0, 00:08:33.728 "format": 0, 00:08:33.728 "firmware": 0, 00:08:33.728 "ns_manage": 0 00:08:33.728 }, 00:08:33.728 "multi_ctrlr": true, 00:08:33.728 
"ana_reporting": false 00:08:33.728 }, 00:08:33.728 "vs": { 00:08:33.728 "nvme_version": "1.3" 00:08:33.728 }, 00:08:33.728 "ns_data": { 00:08:33.728 "id": 1, 00:08:33.728 "can_share": true 00:08:33.728 } 00:08:33.728 } 00:08:33.728 ], 00:08:33.728 "mp_policy": "active_passive" 00:08:33.728 } 00:08:33.728 } 00:08:33.728 ] 00:08:33.728 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65932 00:08:33.728 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.728 08:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:33.986 Running I/O for 10 seconds... 00:08:34.920 Latency(us) 00:08:34.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.920 Nvme0n1 : 1.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:34.920 =================================================================================================================== 00:08:34.920 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:34.920 00:08:35.876 08:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:35.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.876 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:35.876 =================================================================================================================== 00:08:35.876 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:35.876 00:08:36.134 true 00:08:36.134 08:21:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:36.134 08:21:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.392 08:21:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.392 08:21:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.392 08:21:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65932 00:08:36.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.960 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:36.960 =================================================================================================================== 00:08:36.960 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:36.960 00:08:37.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.892 Nvme0n1 : 4.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:37.892 =================================================================================================================== 00:08:37.892 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:37.892 00:08:38.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.844 Nvme0n1 : 5.00 7543.80 29.47 0.00 0.00 0.00 0.00 0.00 00:08:38.844 =================================================================================================================== 00:08:38.844 Total : 7543.80 29.47 0.00 0.00 0.00 
0.00 0.00 00:08:38.844 00:08:39.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.801 Nvme0n1 : 6.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:39.801 =================================================================================================================== 00:08:39.801 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:39.801 00:08:41.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.178 Nvme0n1 : 7.00 7529.29 29.41 0.00 0.00 0.00 0.00 0.00 00:08:41.178 =================================================================================================================== 00:08:41.179 Total : 7529.29 29.41 0.00 0.00 0.00 0.00 0.00 00:08:41.179 00:08:42.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.115 Nvme0n1 : 8.00 7508.88 29.33 0.00 0.00 0.00 0.00 0.00 00:08:42.115 =================================================================================================================== 00:08:42.115 Total : 7508.88 29.33 0.00 0.00 0.00 0.00 0.00 00:08:42.115 00:08:43.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.049 Nvme0n1 : 9.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:43.049 =================================================================================================================== 00:08:43.049 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:43.049 00:08:44.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.006 Nvme0n1 : 10.00 7480.30 29.22 0.00 0.00 0.00 0.00 0.00 00:08:44.006 =================================================================================================================== 00:08:44.006 Total : 7480.30 29.22 0.00 0.00 0.00 0.00 0.00 00:08:44.006 00:08:44.006 00:08:44.006 Latency(us) 00:08:44.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.006 Nvme0n1 : 10.01 7481.90 29.23 0.00 0.00 17103.65 14834.97 35985.22 00:08:44.006 =================================================================================================================== 00:08:44.006 Total : 7481.90 29.23 0.00 0.00 17103.65 14834.97 35985.22 00:08:44.006 0 00:08:44.006 08:21:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65908 00:08:44.006 08:21:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65908 ']' 00:08:44.006 08:21:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65908 00:08:44.006 08:21:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:44.006 08:21:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.006 08:21:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65908 00:08:44.006 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:44.006 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:44.006 killing process with pid 65908 00:08:44.006 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.006 00:08:44.006 Latency(us) 00:08:44.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.006 
=================================================================================================================== 00:08:44.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.006 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65908' 00:08:44.006 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65908 00:08:44.007 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65908 00:08:44.264 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.522 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.781 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:44.781 08:21:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.038 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.038 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:45.038 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.296 [2024-07-15 08:21:37.317307] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:45.296 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:45.555 request: 00:08:45.555 { 00:08:45.555 "uuid": "96706397-f129-44ae-8688-7c4bfa0be93c", 00:08:45.555 "method": "bdev_lvol_get_lvstores", 00:08:45.555 "req_id": 1 00:08:45.555 } 00:08:45.555 Got JSON-RPC error response 00:08:45.555 response: 00:08:45.555 { 00:08:45.555 "code": -19, 00:08:45.555 "message": "No such device" 00:08:45.555 } 00:08:45.555 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:45.555 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:45.555 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:45.555 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:45.555 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.812 aio_bdev 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:45.812 08:21:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.069 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5 -t 2000 00:08:46.328 [ 00:08:46.328 { 00:08:46.328 "name": "7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5", 00:08:46.328 "aliases": [ 00:08:46.328 "lvs/lvol" 00:08:46.328 ], 00:08:46.328 "product_name": "Logical Volume", 00:08:46.328 "block_size": 4096, 00:08:46.328 "num_blocks": 38912, 00:08:46.328 "uuid": "7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5", 00:08:46.328 "assigned_rate_limits": { 00:08:46.328 "rw_ios_per_sec": 0, 00:08:46.328 "rw_mbytes_per_sec": 0, 00:08:46.328 "r_mbytes_per_sec": 0, 00:08:46.328 "w_mbytes_per_sec": 0 00:08:46.328 }, 00:08:46.328 "claimed": false, 00:08:46.328 "zoned": false, 00:08:46.328 "supported_io_types": { 00:08:46.328 "read": true, 00:08:46.328 "write": true, 00:08:46.328 "unmap": true, 00:08:46.328 "flush": false, 00:08:46.328 "reset": true, 00:08:46.328 "nvme_admin": false, 00:08:46.328 "nvme_io": false, 00:08:46.328 "nvme_io_md": false, 00:08:46.328 "write_zeroes": true, 00:08:46.328 "zcopy": false, 00:08:46.328 "get_zone_info": false, 00:08:46.328 "zone_management": false, 00:08:46.328 "zone_append": false, 00:08:46.328 "compare": false, 00:08:46.328 "compare_and_write": false, 00:08:46.328 "abort": false, 00:08:46.328 "seek_hole": true, 00:08:46.328 "seek_data": true, 00:08:46.328 "copy": false, 00:08:46.328 "nvme_iov_md": false 00:08:46.328 }, 00:08:46.328 
"driver_specific": { 00:08:46.328 "lvol": { 00:08:46.328 "lvol_store_uuid": "96706397-f129-44ae-8688-7c4bfa0be93c", 00:08:46.328 "base_bdev": "aio_bdev", 00:08:46.328 "thin_provision": false, 00:08:46.328 "num_allocated_clusters": 38, 00:08:46.328 "snapshot": false, 00:08:46.328 "clone": false, 00:08:46.328 "esnap_clone": false 00:08:46.328 } 00:08:46.328 } 00:08:46.328 } 00:08:46.328 ] 00:08:46.328 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:46.328 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:46.328 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:46.586 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.586 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:46.586 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:46.843 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:46.843 08:21:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7dc4f3a2-8fe6-49a0-a8bb-c49a1b205df5 00:08:47.407 08:21:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96706397-f129-44ae-8688-7c4bfa0be93c 00:08:47.407 08:21:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.665 08:21:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.230 00:08:48.230 real 0m18.481s 00:08:48.230 user 0m17.297s 00:08:48.230 sys 0m2.625s 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.230 ************************************ 00:08:48.230 END TEST lvs_grow_clean 00:08:48.230 ************************************ 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.230 ************************************ 00:08:48.230 START TEST lvs_grow_dirty 00:08:48.230 ************************************ 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.230 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.487 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.487 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:48.745 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e092e61-128f-4190-a38d-650b7aa86903 00:08:48.745 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:48.745 08:21:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:08:49.003 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:49.003 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:49.003 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0e092e61-128f-4190-a38d-650b7aa86903 lvol 150 00:08:49.261 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:08:49.261 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.261 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.519 [2024-07-15 08:21:41.569590] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.519 [2024-07-15 08:21:41.569683] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.519 true 00:08:49.519 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.519 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:08:49.777 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.777 08:21:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.035 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:08:50.294 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:50.551 [2024-07-15 08:21:42.578142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.551 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66186 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66186 /var/tmp/bdevperf.sock 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66186 ']' 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.809 08:21:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.809 [2024-07-15 08:21:42.883862] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
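Condensed from the commands traced above: the lvol is exposed over NVMe/TCP with three target RPCs and then benchmarked with bdevperf. A minimal sketch of that sequence (the $rpc shorthand for scripts/rpc.py and the $lvol variable holding the lvol UUID are illustrative stand-ins; the commands and flags themselves mirror the log):

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf is started with -z, so it idles until an explicit perform_tests RPC arrives
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests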
00:08:50.809 [2024-07-15 08:21:42.883960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66186 ] 00:08:51.068 [2024-07-15 08:21:43.022070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.068 [2024-07-15 08:21:43.138485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.068 [2024-07-15 08:21:43.190600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.001 08:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.001 08:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:52.001 08:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:52.001 Nvme0n1 00:08:52.259 08:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:52.516 [ 00:08:52.516 { 00:08:52.516 "name": "Nvme0n1", 00:08:52.516 "aliases": [ 00:08:52.516 "4863d1cc-1834-49ea-a7d3-bd912e7bff34" 00:08:52.516 ], 00:08:52.516 "product_name": "NVMe disk", 00:08:52.516 "block_size": 4096, 00:08:52.516 "num_blocks": 38912, 00:08:52.516 "uuid": "4863d1cc-1834-49ea-a7d3-bd912e7bff34", 00:08:52.516 "assigned_rate_limits": { 00:08:52.516 "rw_ios_per_sec": 0, 00:08:52.516 "rw_mbytes_per_sec": 0, 00:08:52.516 "r_mbytes_per_sec": 0, 00:08:52.516 "w_mbytes_per_sec": 0 00:08:52.516 }, 00:08:52.516 "claimed": false, 00:08:52.516 "zoned": false, 00:08:52.516 "supported_io_types": { 00:08:52.516 "read": true, 00:08:52.516 "write": true, 00:08:52.516 "unmap": true, 00:08:52.516 "flush": true, 00:08:52.516 "reset": true, 00:08:52.516 "nvme_admin": true, 00:08:52.516 "nvme_io": true, 00:08:52.516 "nvme_io_md": false, 00:08:52.516 "write_zeroes": true, 00:08:52.516 "zcopy": false, 00:08:52.516 "get_zone_info": false, 00:08:52.516 "zone_management": false, 00:08:52.516 "zone_append": false, 00:08:52.516 "compare": true, 00:08:52.516 "compare_and_write": true, 00:08:52.516 "abort": true, 00:08:52.516 "seek_hole": false, 00:08:52.516 "seek_data": false, 00:08:52.516 "copy": true, 00:08:52.516 "nvme_iov_md": false 00:08:52.516 }, 00:08:52.516 "memory_domains": [ 00:08:52.516 { 00:08:52.516 "dma_device_id": "system", 00:08:52.516 "dma_device_type": 1 00:08:52.516 } 00:08:52.516 ], 00:08:52.516 "driver_specific": { 00:08:52.516 "nvme": [ 00:08:52.516 { 00:08:52.516 "trid": { 00:08:52.516 "trtype": "TCP", 00:08:52.516 "adrfam": "IPv4", 00:08:52.516 "traddr": "10.0.0.2", 00:08:52.516 "trsvcid": "4420", 00:08:52.516 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:52.516 }, 00:08:52.516 "ctrlr_data": { 00:08:52.516 "cntlid": 1, 00:08:52.516 "vendor_id": "0x8086", 00:08:52.516 "model_number": "SPDK bdev Controller", 00:08:52.516 "serial_number": "SPDK0", 00:08:52.516 "firmware_revision": "24.09", 00:08:52.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.516 "oacs": { 00:08:52.516 "security": 0, 00:08:52.516 "format": 0, 00:08:52.516 "firmware": 0, 00:08:52.516 "ns_manage": 0 00:08:52.516 }, 00:08:52.516 "multi_ctrlr": true, 00:08:52.516 
"ana_reporting": false 00:08:52.516 }, 00:08:52.516 "vs": { 00:08:52.516 "nvme_version": "1.3" 00:08:52.516 }, 00:08:52.516 "ns_data": { 00:08:52.516 "id": 1, 00:08:52.516 "can_share": true 00:08:52.516 } 00:08:52.516 } 00:08:52.516 ], 00:08:52.517 "mp_policy": "active_passive" 00:08:52.517 } 00:08:52.517 } 00:08:52.517 ] 00:08:52.517 08:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66208 00:08:52.517 08:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:52.517 08:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.517 Running I/O for 10 seconds... 00:08:53.451 Latency(us) 00:08:53.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.452 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:53.452 =================================================================================================================== 00:08:53.452 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:53.452 00:08:54.387 08:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e092e61-128f-4190-a38d-650b7aa86903 00:08:54.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.645 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:54.645 =================================================================================================================== 00:08:54.645 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:54.645 00:08:54.645 true 00:08:54.645 08:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:54.645 08:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:08:55.211 08:21:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:55.211 08:21:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:55.211 08:21:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66208 00:08:55.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.469 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:55.469 =================================================================================================================== 00:08:55.469 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:08:55.469 00:08:56.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.450 Nvme0n1 : 4.00 7524.75 29.39 0.00 0.00 0.00 0.00 0.00 00:08:56.450 =================================================================================================================== 00:08:56.450 Total : 7524.75 29.39 0.00 0.00 0.00 0.00 0.00 00:08:56.450 00:08:57.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.826 Nvme0n1 : 5.00 7442.20 29.07 0.00 0.00 0.00 0.00 0.00 00:08:57.826 =================================================================================================================== 00:08:57.826 Total : 7442.20 29.07 0.00 0.00 0.00 
0.00 0.00 00:08:57.826 00:08:58.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.761 Nvme0n1 : 6.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:58.761 =================================================================================================================== 00:08:58.761 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:58.761 00:08:59.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.697 Nvme0n1 : 7.00 7280.71 28.44 0.00 0.00 0.00 0.00 0.00 00:08:59.697 =================================================================================================================== 00:08:59.697 Total : 7280.71 28.44 0.00 0.00 0.00 0.00 0.00 00:08:59.697 00:09:00.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.635 Nvme0n1 : 8.00 7243.75 28.30 0.00 0.00 0.00 0.00 0.00 00:09:00.635 =================================================================================================================== 00:09:00.635 Total : 7243.75 28.30 0.00 0.00 0.00 0.00 0.00 00:09:00.635 00:09:01.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.569 Nvme0n1 : 9.00 7243.22 28.29 0.00 0.00 0.00 0.00 0.00 00:09:01.569 =================================================================================================================== 00:09:01.569 Total : 7243.22 28.29 0.00 0.00 0.00 0.00 0.00 00:09:01.569 00:09:02.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.504 Nvme0n1 : 10.00 7230.10 28.24 0.00 0.00 0.00 0.00 0.00 00:09:02.504 =================================================================================================================== 00:09:02.504 Total : 7230.10 28.24 0.00 0.00 0.00 0.00 0.00 00:09:02.504 00:09:02.504 00:09:02.504 Latency(us) 00:09:02.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.504 Nvme0n1 : 10.01 7234.35 28.26 0.00 0.00 17686.41 12392.26 158239.65 00:09:02.504 =================================================================================================================== 00:09:02.504 Total : 7234.35 28.26 0.00 0.00 17686.41 12392.26 158239.65 00:09:02.504 0 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66186 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66186 ']' 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66186 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66186 00:09:02.505 killing process with pid 66186 00:09:02.505 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.505 00:09:02.505 Latency(us) 00:09:02.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.505 =================================================================================================================== 00:09:02.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66186' 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66186 00:09:02.505 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66186 00:09:02.763 08:21:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.022 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.281 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:03.281 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65826 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65826 00:09:03.850 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65826 Killed "${NVMF_APP[@]}" "$@" 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66343 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66343 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66343 ']' 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
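The kill -9 branch above is what makes this the "dirty" variant: the nvmf target holding the lvstore is killed without a clean shutdown and then restarted. A simplified sketch of that step, reusing names from the trace ($rpc and $aio_file are illustrative stand-ins; the real flow goes through nvmfappstart and the NVMF_APP array):

  kill -9 "$nvmfpid"            # target dies without closing the lvstore
  wait "$nvmfpid" || true
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"      # autotest helper: poll the RPC socket until the app answers
  # Re-creating the AIO bdev reloads the lvstore from disk; because it was never cleanly
  # closed, blobstore recovery runs (the "Performing recovery on blobstore" notices below).
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096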
00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.850 08:21:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.850 [2024-07-15 08:21:55.815589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:03.850 [2024-07-15 08:21:55.815694] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.850 [2024-07-15 08:21:55.953227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.109 [2024-07-15 08:21:56.071334] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.109 [2024-07-15 08:21:56.071402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.109 [2024-07-15 08:21:56.071414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.109 [2024-07-15 08:21:56.071422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.109 [2024-07-15 08:21:56.071430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.109 [2024-07-15 08:21:56.071463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.109 [2024-07-15 08:21:56.125494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.676 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.676 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:04.676 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.676 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.676 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.935 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.935 08:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.935 [2024-07-15 08:21:57.072393] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:04.935 [2024-07-15 08:21:57.072716] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:04.935 [2024-07-15 08:21:57.072992] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
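The waitforbdev call traced here reduces to two RPCs: wait for bdev examine to finish, then ask bdev_get_bdevs to block until the named bdev appears or the timeout expires. A simplified sketch of the helper (the real function lives in autotest_common.sh; $rpc is an illustrative stand-in for scripts/rpc.py):

  waitforbdev() {
      local bdev_name=$1
      local bdev_timeout=${2:-2000}           # milliseconds, as in the "-t 2000" above
      $rpc bdev_wait_for_examine              # let registered examine callbacks complete
      $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
  }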
00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:05.193 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.452 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4863d1cc-1834-49ea-a7d3-bd912e7bff34 -t 2000 00:09:05.711 [ 00:09:05.711 { 00:09:05.711 "name": "4863d1cc-1834-49ea-a7d3-bd912e7bff34", 00:09:05.711 "aliases": [ 00:09:05.711 "lvs/lvol" 00:09:05.711 ], 00:09:05.711 "product_name": "Logical Volume", 00:09:05.711 "block_size": 4096, 00:09:05.711 "num_blocks": 38912, 00:09:05.711 "uuid": "4863d1cc-1834-49ea-a7d3-bd912e7bff34", 00:09:05.711 "assigned_rate_limits": { 00:09:05.711 "rw_ios_per_sec": 0, 00:09:05.711 "rw_mbytes_per_sec": 0, 00:09:05.711 "r_mbytes_per_sec": 0, 00:09:05.711 "w_mbytes_per_sec": 0 00:09:05.711 }, 00:09:05.711 "claimed": false, 00:09:05.711 "zoned": false, 00:09:05.711 "supported_io_types": { 00:09:05.711 "read": true, 00:09:05.711 "write": true, 00:09:05.711 "unmap": true, 00:09:05.711 "flush": false, 00:09:05.711 "reset": true, 00:09:05.711 "nvme_admin": false, 00:09:05.711 "nvme_io": false, 00:09:05.711 "nvme_io_md": false, 00:09:05.711 "write_zeroes": true, 00:09:05.711 "zcopy": false, 00:09:05.711 "get_zone_info": false, 00:09:05.711 "zone_management": false, 00:09:05.711 "zone_append": false, 00:09:05.711 "compare": false, 00:09:05.711 "compare_and_write": false, 00:09:05.711 "abort": false, 00:09:05.711 "seek_hole": true, 00:09:05.711 "seek_data": true, 00:09:05.711 "copy": false, 00:09:05.711 "nvme_iov_md": false 00:09:05.711 }, 00:09:05.711 "driver_specific": { 00:09:05.711 "lvol": { 00:09:05.711 "lvol_store_uuid": "0e092e61-128f-4190-a38d-650b7aa86903", 00:09:05.711 "base_bdev": "aio_bdev", 00:09:05.711 "thin_provision": false, 00:09:05.711 "num_allocated_clusters": 38, 00:09:05.711 "snapshot": false, 00:09:05.711 "clone": false, 00:09:05.711 "esnap_clone": false 00:09:05.711 } 00:09:05.711 } 00:09:05.711 } 00:09:05.711 ] 00:09:05.711 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:05.711 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:05.711 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:05.970 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:05.970 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:05.970 08:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:06.229 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:06.229 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.486 [2024-07-15 08:21:58.413795] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.486 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:06.487 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:06.744 request: 00:09:06.744 { 00:09:06.744 "uuid": "0e092e61-128f-4190-a38d-650b7aa86903", 00:09:06.744 "method": "bdev_lvol_get_lvstores", 00:09:06.744 "req_id": 1 00:09:06.744 } 00:09:06.744 Got JSON-RPC error response 00:09:06.744 response: 00:09:06.744 { 00:09:06.744 "code": -19, 00:09:06.744 "message": "No such device" 00:09:06.744 } 00:09:06.744 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:06.744 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.744 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.744 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.744 08:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.002 aio_bdev 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:07.002 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.259 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4863d1cc-1834-49ea-a7d3-bd912e7bff34 -t 2000 00:09:07.517 [ 00:09:07.517 { 00:09:07.517 "name": "4863d1cc-1834-49ea-a7d3-bd912e7bff34", 00:09:07.517 "aliases": [ 00:09:07.517 "lvs/lvol" 00:09:07.517 ], 00:09:07.517 "product_name": "Logical Volume", 00:09:07.517 "block_size": 4096, 00:09:07.517 "num_blocks": 38912, 00:09:07.517 "uuid": "4863d1cc-1834-49ea-a7d3-bd912e7bff34", 00:09:07.517 "assigned_rate_limits": { 00:09:07.517 "rw_ios_per_sec": 0, 00:09:07.517 "rw_mbytes_per_sec": 0, 00:09:07.517 "r_mbytes_per_sec": 0, 00:09:07.517 "w_mbytes_per_sec": 0 00:09:07.517 }, 00:09:07.517 "claimed": false, 00:09:07.517 "zoned": false, 00:09:07.517 "supported_io_types": { 00:09:07.517 "read": true, 00:09:07.517 "write": true, 00:09:07.517 "unmap": true, 00:09:07.517 "flush": false, 00:09:07.517 "reset": true, 00:09:07.517 "nvme_admin": false, 00:09:07.517 "nvme_io": false, 00:09:07.517 "nvme_io_md": false, 00:09:07.517 "write_zeroes": true, 00:09:07.517 "zcopy": false, 00:09:07.517 "get_zone_info": false, 00:09:07.517 "zone_management": false, 00:09:07.517 "zone_append": false, 00:09:07.517 "compare": false, 00:09:07.517 "compare_and_write": false, 00:09:07.517 "abort": false, 00:09:07.517 "seek_hole": true, 00:09:07.517 "seek_data": true, 00:09:07.517 "copy": false, 00:09:07.517 "nvme_iov_md": false 00:09:07.517 }, 00:09:07.517 "driver_specific": { 00:09:07.517 "lvol": { 00:09:07.517 "lvol_store_uuid": "0e092e61-128f-4190-a38d-650b7aa86903", 00:09:07.517 "base_bdev": "aio_bdev", 00:09:07.517 "thin_provision": false, 00:09:07.517 "num_allocated_clusters": 38, 00:09:07.517 "snapshot": false, 00:09:07.517 "clone": false, 00:09:07.517 "esnap_clone": false 00:09:07.517 } 00:09:07.517 } 00:09:07.517 } 00:09:07.517 ] 00:09:07.517 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:07.517 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:07.517 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.778 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.778 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:07.778 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:08.035 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:08.035 08:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4863d1cc-1834-49ea-a7d3-bd912e7bff34 00:09:08.292 08:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 0e092e61-128f-4190-a38d-650b7aa86903 00:09:08.550 08:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.806 08:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.063 00:09:09.063 real 0m20.968s 00:09:09.063 user 0m44.420s 00:09:09.063 sys 0m7.941s 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.063 ************************************ 00:09:09.063 END TEST lvs_grow_dirty 00:09:09.063 ************************************ 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:09.063 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:09.063 nvmf_trace.0 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.320 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.320 rmmod nvme_tcp 00:09:09.320 rmmod nvme_fabrics 00:09:09.579 rmmod nvme_keyring 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66343 ']' 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66343 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66343 ']' 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66343 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66343 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.579 killing process with pid 66343 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66343' 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66343 00:09:09.579 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66343 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:09.837 ************************************ 00:09:09.837 END TEST nvmf_lvs_grow 00:09:09.837 ************************************ 00:09:09.837 00:09:09.837 real 0m41.927s 00:09:09.837 user 1m8.284s 00:09:09.837 sys 0m11.250s 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.837 08:22:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.837 08:22:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.837 08:22:01 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.837 08:22:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.837 08:22:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.837 08:22:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.837 ************************************ 00:09:09.837 START TEST nvmf_bdev_io_wait 00:09:09.837 ************************************ 00:09:09.837 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.837 * Looking for test storage... 
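The START TEST / END TEST banners and the real/user/sys timings that bracket each test above come from the run_test wrapper in autotest_common.sh. Roughly, its visible behavior is the following sketch (an illustration only; the real helper also tracks failures and xtrace state):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"        # e.g. run_test nvmf_bdev_io_wait .../bdev_io_wait.sh --transport=tcp
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }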
00:09:09.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.838 08:22:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:09.838 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:10.097 Cannot find device "nvmf_tgt_br" 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.097 Cannot find device "nvmf_tgt_br2" 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:10.097 Cannot find device "nvmf_tgt_br" 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:10.097 Cannot find device "nvmf_tgt_br2" 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:10.097 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:10.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:10.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:09:10.355 00:09:10.355 --- 10.0.0.2 ping statistics --- 00:09:10.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.355 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:10.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:10.355 00:09:10.355 --- 10.0.0.3 ping statistics --- 00:09:10.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.355 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:10.355 00:09:10.355 --- 10.0.0.1 ping statistics --- 00:09:10.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.355 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66661 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66661 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66661 ']' 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
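Condensed from the ip/iptables commands above, nvmf_veth_init builds a small veth-and-bridge topology, after which nvmf_tgt is started with ip netns exec inside nvmf_tgt_ns_spdk, which is why the listener below sits on 10.0.0.2 (device names and addresses exactly as in the log):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do           # bridge the root-side veth peers together
        ip link set "$br" up
        ip link set "$br" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target, as verified above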
00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.355 08:22:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.355 [2024-07-15 08:22:02.416508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:10.355 [2024-07-15 08:22:02.416620] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.613 [2024-07-15 08:22:02.556934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.613 [2024-07-15 08:22:02.690257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.613 [2024-07-15 08:22:02.690591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.613 [2024-07-15 08:22:02.690695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.613 [2024-07-15 08:22:02.690846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.613 [2024-07-15 08:22:02.690930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.613 [2024-07-15 08:22:02.691142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.613 [2024-07-15 08:22:02.691293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.613 [2024-07-15 08:22:02.691772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.613 [2024-07-15 08:22:02.691786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.549 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.549 [2024-07-15 08:22:03.513942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.550 
08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.550 [2024-07-15 08:22:03.526142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.550 Malloc0 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.550 [2024-07-15 08:22:03.591911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66696 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66698 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.550 { 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme$subsystem", 00:09:11.550 "trtype": "$TEST_TRANSPORT", 00:09:11.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "$NVMF_PORT", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.550 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:11.550 "hdgst": ${hdgst:-false}, 00:09:11.550 "ddgst": ${ddgst:-false} 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 } 00:09:11.550 EOF 00:09:11.550 )") 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66700 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.550 { 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme$subsystem", 00:09:11.550 "trtype": "$TEST_TRANSPORT", 00:09:11.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "$NVMF_PORT", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.550 "hdgst": ${hdgst:-false}, 00:09:11.550 "ddgst": ${ddgst:-false} 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 } 00:09:11.550 EOF 00:09:11.550 )") 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.550 { 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme$subsystem", 00:09:11.550 "trtype": "$TEST_TRANSPORT", 00:09:11.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "$NVMF_PORT", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.550 "hdgst": ${hdgst:-false}, 00:09:11.550 "ddgst": ${ddgst:-false} 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 } 00:09:11.550 EOF 00:09:11.550 )") 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66702 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.550 { 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme$subsystem", 00:09:11.550 "trtype": "$TEST_TRANSPORT", 00:09:11.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "$NVMF_PORT", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.550 "hdgst": ${hdgst:-false}, 00:09:11.550 "ddgst": ${ddgst:-false} 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 } 00:09:11.550 EOF 00:09:11.550 )") 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme1", 00:09:11.550 "trtype": "tcp", 00:09:11.550 "traddr": "10.0.0.2", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "4420", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.550 "hdgst": false, 00:09:11.550 "ddgst": false 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 }' 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
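Reformatted, the flattened printf output just above is the attach-controller entry that gen_nvmf_target_json produces once the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT placeholders are substituted (the JSON wrapper that bdevperf ultimately reads from /dev/fd/63 is not shown in this log):

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}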
00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme1", 00:09:11.550 "trtype": "tcp", 00:09:11.550 "traddr": "10.0.0.2", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "4420", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.550 "hdgst": false, 00:09:11.550 "ddgst": false 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 }' 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme1", 00:09:11.550 "trtype": "tcp", 00:09:11.550 "traddr": "10.0.0.2", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "4420", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.550 "hdgst": false, 00:09:11.550 "ddgst": false 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 }' 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.550 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.550 "params": { 00:09:11.550 "name": "Nvme1", 00:09:11.550 "trtype": "tcp", 00:09:11.550 "traddr": "10.0.0.2", 00:09:11.550 "adrfam": "ipv4", 00:09:11.550 "trsvcid": "4420", 00:09:11.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.550 "hdgst": false, 00:09:11.550 "ddgst": false 00:09:11.550 }, 00:09:11.550 "method": "bdev_nvme_attach_controller" 00:09:11.550 }' 00:09:11.550 [2024-07-15 08:22:03.663348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:11.550 [2024-07-15 08:22:03.663451] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:11.551 [2024-07-15 08:22:03.663617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:11.551 [2024-07-15 08:22:03.663685] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:11.551 08:22:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66696 00:09:11.551 [2024-07-15 08:22:03.688757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:11.551 [2024-07-15 08:22:03.689149] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:11.551 [2024-07-15 08:22:03.714184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:11.551 [2024-07-15 08:22:03.714305] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:11.808 [2024-07-15 08:22:03.876467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.808 [2024-07-15 08:22:03.946635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.074 [2024-07-15 08:22:03.981444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.074 [2024-07-15 08:22:04.031431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.074 [2024-07-15 08:22:04.055666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.074 [2024-07-15 08:22:04.066237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.074 [2024-07-15 08:22:04.102235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.074 [2024-07-15 08:22:04.115166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.074 Running I/O for 1 seconds... 00:09:12.074 [2024-07-15 08:22:04.168218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:12.074 Running I/O for 1 seconds... 00:09:12.074 [2024-07-15 08:22:04.213819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.074 [2024-07-15 08:22:04.218662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.332 [2024-07-15 08:22:04.264863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.332 Running I/O for 1 seconds... 00:09:12.332 Running I/O for 1 seconds... 
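At this point four independent bdevperf processes are running against the same Malloc0-backed namespace, one per I/O type, each with its own core mask and instance id and each fed the attach config on /dev/fd/63 (presumably via process substitution of gen_nvmf_target_json). The write instance, for example, amounts to:

# One of the four jobs; the others differ only in -m/-i and -w read|flush|unmap.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# ...later: wait "$WRITE_PID" and friends, producing the per-workload latency tables below.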
00:09:13.269 00:09:13.269 Latency(us) 00:09:13.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.269 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:13.269 Nvme1n1 : 1.02 6893.94 26.93 0.00 0.00 18471.37 7208.96 36461.85 00:09:13.269 =================================================================================================================== 00:09:13.270 Total : 6893.94 26.93 0.00 0.00 18471.37 7208.96 36461.85 00:09:13.270 00:09:13.270 Latency(us) 00:09:13.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.270 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:13.270 Nvme1n1 : 1.00 167814.16 655.52 0.00 0.00 759.89 366.78 1184.12 00:09:13.270 =================================================================================================================== 00:09:13.270 Total : 167814.16 655.52 0.00 0.00 759.89 366.78 1184.12 00:09:13.270 00:09:13.270 Latency(us) 00:09:13.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.270 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:13.270 Nvme1n1 : 1.01 6351.90 24.81 0.00 0.00 20067.99 7119.59 43134.60 00:09:13.270 =================================================================================================================== 00:09:13.270 Total : 6351.90 24.81 0.00 0.00 20067.99 7119.59 43134.60 00:09:13.270 00:09:13.270 Latency(us) 00:09:13.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.270 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:13.270 Nvme1n1 : 1.01 7696.55 30.06 0.00 0.00 16473.57 9175.04 26095.24 00:09:13.270 =================================================================================================================== 00:09:13.270 Total : 7696.55 30.06 0.00 0.00 16473.57 9175.04 26095.24 00:09:13.270 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66698 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66700 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66702 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.529 rmmod nvme_tcp 00:09:13.529 rmmod nvme_fabrics 00:09:13.529 rmmod nvme_keyring 00:09:13.529 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66661 ']' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66661 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66661 ']' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66661 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66661 00:09:13.788 killing process with pid 66661 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66661' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66661 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66661 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.788 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.047 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.047 00:09:14.047 real 0m4.104s 00:09:14.047 user 0m18.024s 00:09:14.047 sys 0m2.211s 00:09:14.047 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.047 08:22:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.047 ************************************ 00:09:14.047 END TEST nvmf_bdev_io_wait 00:09:14.047 ************************************ 00:09:14.047 08:22:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:14.047 08:22:06 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:14.047 08:22:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.047 08:22:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.047 08:22:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.047 ************************************ 00:09:14.047 START TEST nvmf_queue_depth 00:09:14.047 ************************************ 00:09:14.047 08:22:06 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:14.047 * Looking for test storage... 00:09:14.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:14.047 Cannot find device "nvmf_tgt_br" 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:14.047 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.048 Cannot find device "nvmf_tgt_br2" 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:14.048 Cannot find device "nvmf_tgt_br" 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:14.048 Cannot find device "nvmf_tgt_br2" 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:14.048 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:14.305 08:22:06 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:09:14.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:09:14.305 00:09:14.305 --- 10.0.0.2 ping statistics --- 00:09:14.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.305 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:09:14.305 00:09:14.305 --- 10.0.0.3 ping statistics --- 00:09:14.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.305 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:14.305 00:09:14.305 --- 10.0.0.1 ping statistics --- 00:09:14.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.305 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.305 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66936 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66936 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66936 ']' 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
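nvmfappstart launches nvmf_tgt inside the namespace and then sits in waitforlisten until the application answers on /var/tmp/spdk.sock; the helper itself is not shown in this log, but a hypothetical equivalent of what it waits for is:

# Hypothetical sketch of the waitforlisten step: poll the RPC socket until the target responds.
for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
                break
        fi
        sleep 0.1
done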
00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.563 08:22:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.563 [2024-07-15 08:22:06.544223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:14.563 [2024-07-15 08:22:06.544320] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.563 [2024-07-15 08:22:06.684009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.820 [2024-07-15 08:22:06.803092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.820 [2024-07-15 08:22:06.803153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.820 [2024-07-15 08:22:06.803165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.820 [2024-07-15 08:22:06.803174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.820 [2024-07-15 08:22:06.803181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.820 [2024-07-15 08:22:06.803213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.820 [2024-07-15 08:22:06.857149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 [2024-07-15 08:22:07.552946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.387 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.645 Malloc0 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.645 [2024-07-15 08:22:07.616346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66968 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66968 /var/tmp/bdevperf.sock 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66968 ']' 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.645 08:22:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.645 [2024-07-15 08:22:07.667522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
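Unlike the bdev_io_wait run above, the queue-depth test starts a single bdevperf in wait mode (-z) with its own RPC socket, attaches the remote namespace through that socket, and only then kicks off the 10-second verify job with bdevperf.py, as the following lines show. Condensed, again assuming rpc_cmd maps to scripts/rpc.py:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# Attach the target's namespace as NVMe0n1 inside the waiting bdevperf process.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Release the queued job and collect the results printed below.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests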
00:09:15.645 [2024-07-15 08:22:07.667623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66968 ] 00:09:15.645 [2024-07-15 08:22:07.804504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.903 [2024-07-15 08:22:07.935196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.903 [2024-07-15 08:22:07.991729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.514 08:22:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.514 08:22:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:16.514 08:22:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:16.514 08:22:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.514 08:22:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.772 NVMe0n1 00:09:16.772 08:22:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.772 08:22:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.772 Running I/O for 10 seconds... 00:09:28.985 00:09:28.985 Latency(us) 00:09:28.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.985 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:28.985 Verification LBA range: start 0x0 length 0x4000 00:09:28.985 NVMe0n1 : 10.09 7770.60 30.35 0.00 0.00 131068.41 26333.56 94848.47 00:09:28.985 =================================================================================================================== 00:09:28.985 Total : 7770.60 30.35 0.00 0.00 131068.41 26333.56 94848.47 00:09:28.985 0 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66968 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66968 ']' 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66968 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66968 00:09:28.985 killing process with pid 66968 00:09:28.985 Received shutdown signal, test time was about 10.000000 seconds 00:09:28.985 00:09:28.985 Latency(us) 00:09:28.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.985 =================================================================================================================== 00:09:28.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 66968' 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66968 00:09:28.985 08:22:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66968 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.985 rmmod nvme_tcp 00:09:28.985 rmmod nvme_fabrics 00:09:28.985 rmmod nvme_keyring 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66936 ']' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66936 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66936 ']' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66936 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66936 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:28.985 killing process with pid 66936 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66936' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66936 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66936 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:28.985 00:09:28.985 real 
0m13.611s 00:09:28.985 user 0m23.545s 00:09:28.985 sys 0m2.258s 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.985 08:22:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.985 ************************************ 00:09:28.985 END TEST nvmf_queue_depth 00:09:28.985 ************************************ 00:09:28.985 08:22:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:28.985 08:22:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:28.985 08:22:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.985 08:22:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.985 08:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.985 ************************************ 00:09:28.985 START TEST nvmf_target_multipath 00:09:28.985 ************************************ 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:28.985 * Looking for test storage... 00:09:28.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.985 08:22:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.986 08:22:19 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:28.986 Cannot find device "nvmf_tgt_br" 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.986 Cannot find device "nvmf_tgt_br2" 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:28.986 Cannot find device "nvmf_tgt_br" 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:28.986 Cannot find device "nvmf_tgt_br2" 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.986 08:22:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
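The nvmf_veth_init steps traced here (and continuing below) build a small virtual topology: two target-side veth interfaces carrying 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 on the host side, and the host-side peers are joined by the nvmf_br bridge. A condensed sketch of that setup, with the stale-interface cleanup and error handling omitted:

# Condensed view of nvmf_veth_init as traced in this log.
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target's listeners.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace and get the listener addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side ends together.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in on the initiator-side interface and across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 further down simply confirm this topology is reachable before the target is started.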
00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:28.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:09:28.986 00:09:28.986 --- 10.0.0.2 ping statistics --- 00:09:28.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.986 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:28.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:28.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:28.986 00:09:28.986 --- 10.0.0.3 ping statistics --- 00:09:28.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.986 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:28.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:28.986 00:09:28.986 --- 10.0.0.1 ping statistics --- 00:09:28.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.986 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67288 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67288 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67288 ']' 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.986 08:22:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.986 [2024-07-15 08:22:20.263458] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:28.986 [2024-07-15 08:22:20.263568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.986 [2024-07-15 08:22:20.400373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.986 [2024-07-15 08:22:20.520168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.986 [2024-07-15 08:22:20.520233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.986 [2024-07-15 08:22:20.520245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.986 [2024-07-15 08:22:20.520254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.986 [2024-07-15 08:22:20.520261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.986 [2024-07-15 08:22:20.520362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.986 [2024-07-15 08:22:20.520834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.986 [2024-07-15 08:22:20.521386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.986 [2024-07-15 08:22:20.521378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.986 [2024-07-15 08:22:20.574228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.243 08:22:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.501 [2024-07-15 08:22:21.491136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.501 08:22:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:29.758 Malloc0 00:09:29.758 08:22:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:30.014 08:22:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.271 08:22:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.528 [2024-07-15 08:22:22.592915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.528 08:22:22 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.786 [2024-07-15 08:22:22.817123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.786 08:22:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:31.043 08:22:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:31.043 08:22:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.043 08:22:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.043 08:22:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.043 08:22:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.043 08:22:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:32.946 08:22:25 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:32.946 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67383 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:33.205 08:22:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:33.205 [global] 00:09:33.205 thread=1 00:09:33.205 invalidate=1 00:09:33.205 rw=randrw 00:09:33.205 time_based=1 00:09:33.205 runtime=6 00:09:33.205 ioengine=libaio 00:09:33.205 direct=1 00:09:33.205 bs=4096 00:09:33.205 iodepth=128 00:09:33.205 norandommap=0 00:09:33.205 numjobs=1 00:09:33.205 00:09:33.205 verify_dump=1 00:09:33.205 verify_backlog=512 00:09:33.205 verify_state_save=0 00:09:33.205 do_verify=1 00:09:33.205 verify=crc32c-intel 00:09:33.205 [job0] 00:09:33.205 filename=/dev/nvme0n1 00:09:33.205 Could not set queue depth (nvme0n1) 00:09:33.205 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.205 fio-3.35 00:09:33.205 Starting 1 thread 00:09:34.164 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:34.422 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:34.680 
08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:34.680 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:34.938 08:22:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:35.197 08:22:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67383 00:09:39.383 00:09:39.383 job0: (groupid=0, jobs=1): err= 0: pid=67404: Mon Jul 15 08:22:31 2024 00:09:39.383 read: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(239MiB/6002msec) 00:09:39.383 slat (usec): min=6, max=7571, avg=57.25, stdev=228.56 00:09:39.383 clat (usec): min=1210, max=18622, avg=8469.39, stdev=1592.04 00:09:39.383 lat (usec): min=1731, max=18638, avg=8526.64, stdev=1597.08 00:09:39.383 clat percentiles (usec): 00:09:39.383 | 1.00th=[ 4359], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 7635], 00:09:39.384 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8455], 00:09:39.384 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[12125], 00:09:39.384 | 99.00th=[13304], 99.50th=[13566], 99.90th=[17171], 99.95th=[17433], 00:09:39.384 | 99.99th=[18482] 00:09:39.384 bw ( KiB/s): min= 9264, max=27152, per=53.15%, avg=21655.18, stdev=5235.39, samples=11 00:09:39.384 iops : min= 2316, max= 6788, avg=5413.73, stdev=1308.81, samples=11 00:09:39.384 write: IOPS=6090, BW=23.8MiB/s (24.9MB/s)(129MiB/5418msec); 0 zone resets 00:09:39.384 slat (usec): min=12, max=2876, avg=66.02, stdev=161.23 00:09:39.384 clat (usec): min=2530, max=18366, avg=7381.04, stdev=1373.65 00:09:39.384 lat (usec): min=2556, max=18397, avg=7447.06, stdev=1378.41 00:09:39.384 clat percentiles (usec): 00:09:39.384 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 5932], 20.00th=[ 6783], 00:09:39.384 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7635], 00:09:39.384 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 9372], 00:09:39.384 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13566], 99.95th=[14353], 00:09:39.384 | 99.99th=[16188] 00:09:39.384 bw ( KiB/s): min= 9656, max=27048, per=88.92%, avg=21663.27, stdev=4877.93, samples=11 00:09:39.384 iops : min= 2414, max= 6762, avg=5415.82, stdev=1219.48, samples=11 00:09:39.384 lat (msec) : 2=0.03%, 4=1.34%, 10=90.33%, 20=8.30% 00:09:39.384 cpu : usr=5.73%, sys=21.08%, ctx=5488, majf=0, minf=84 00:09:39.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:39.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.384 issued rwts: total=61135,32999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.384 00:09:39.384 Run status group 0 (all jobs): 00:09:39.384 READ: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=239MiB (250MB), run=6002-6002msec 00:09:39.384 WRITE: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=129MiB (135MB), run=5418-5418msec 00:09:39.384 00:09:39.384 Disk stats (read/write): 00:09:39.384 nvme0n1: ios=60222/32487, merge=0/0, ticks=490271/225609, in_queue=715880, util=98.60% 00:09:39.384 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:39.642 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:39.900 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67484 00:09:39.901 08:22:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:39.901 [global] 00:09:39.901 thread=1 00:09:39.901 invalidate=1 00:09:39.901 rw=randrw 00:09:39.901 time_based=1 00:09:39.901 runtime=6 00:09:39.901 ioengine=libaio 00:09:39.901 direct=1 00:09:39.901 bs=4096 00:09:39.901 iodepth=128 00:09:39.901 norandommap=0 00:09:39.901 numjobs=1 00:09:39.901 00:09:39.901 verify_dump=1 00:09:39.901 verify_backlog=512 00:09:39.901 verify_state_save=0 00:09:39.901 do_verify=1 00:09:39.901 verify=crc32c-intel 00:09:39.901 [job0] 00:09:39.901 filename=/dev/nvme0n1 00:09:39.901 Could not set queue depth (nvme0n1) 00:09:40.158 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.158 fio-3.35 00:09:40.158 Starting 1 thread 00:09:41.093 08:22:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:41.093 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:41.657 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:41.914 08:22:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:42.172 08:22:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67484 00:09:46.358 00:09:46.358 job0: (groupid=0, jobs=1): err= 0: pid=67505: Mon Jul 15 08:22:38 2024 00:09:46.358 read: IOPS=11.4k, BW=44.7MiB/s (46.8MB/s)(268MiB/6007msec) 00:09:46.358 slat (usec): min=2, max=8675, avg=43.69, stdev=201.72 00:09:46.358 clat (usec): min=255, max=17763, avg=7693.00, stdev=2001.70 00:09:46.358 lat (usec): min=284, max=17786, avg=7736.69, stdev=2018.28 00:09:46.358 clat percentiles (usec): 00:09:46.358 | 1.00th=[ 2966], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5932], 00:09:46.358 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8291], 00:09:46.358 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11600], 00:09:46.358 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13829], 99.95th=[13960], 00:09:46.358 | 99.99th=[14746] 00:09:46.358 bw ( KiB/s): min=10008, max=42600, per=52.87%, avg=24188.36, stdev=9623.05, samples=11 00:09:46.358 iops : min= 2502, max=10650, avg=6047.09, stdev=2405.76, samples=11 00:09:46.358 write: IOPS=6810, BW=26.6MiB/s (27.9MB/s)(141MiB/5314msec); 0 zone resets 00:09:46.358 slat (usec): min=3, max=2174, avg=55.00, stdev=140.79 00:09:46.358 clat (usec): min=1100, max=14676, avg=6467.57, stdev=1851.39 00:09:46.358 lat (usec): min=1119, max=14695, avg=6522.57, stdev=1867.63 00:09:46.358 clat percentiles (usec): 00:09:46.358 | 1.00th=[ 2540], 5.00th=[ 3294], 10.00th=[ 3785], 20.00th=[ 4424], 00:09:46.358 | 30.00th=[ 5145], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7439], 00:09:46.358 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:46.358 | 99.00th=[11076], 99.50th=[11731], 99.90th=[13042], 99.95th=[13566], 00:09:46.358 | 99.99th=[14353] 00:09:46.358 bw ( KiB/s): min=10360, max=42032, per=88.89%, avg=24215.18, stdev=9463.34, samples=11 00:09:46.358 iops : min= 2590, max=10508, avg=6053.73, stdev=2365.93, samples=11 00:09:46.358 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:46.358 lat (msec) : 2=0.14%, 4=6.75%, 10=88.37%, 20=4.71% 00:09:46.358 cpu : usr=5.81%, sys=22.61%, ctx=6097, majf=0, minf=133 00:09:46.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:46.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.358 issued rwts: total=68704,36190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.358 00:09:46.358 Run status group 0 (all jobs): 00:09:46.358 READ: bw=44.7MiB/s (46.8MB/s), 44.7MiB/s-44.7MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6007-6007msec 00:09:46.358 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=141MiB (148MB), run=5314-5314msec 00:09:46.358 00:09:46.358 Disk stats (read/write): 00:09:46.358 nvme0n1: ios=67830/35569, merge=0/0, ticks=496919/213484, in_queue=710403, util=98.66% 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:46.358 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.616 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:46.616 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:46.616 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:46.616 08:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.617 rmmod nvme_tcp 00:09:46.617 rmmod nvme_fabrics 00:09:46.617 rmmod nvme_keyring 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67288 ']' 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67288 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67288 ']' 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67288 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67288 00:09:46.617 killing process with pid 67288 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67288' 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67288 00:09:46.617 08:22:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67288 00:09:46.875 
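To recap the multipath exercise that has just finished: the subsystem exposes one namespace through both listeners, the host connects to each path, and the test then drives ANA transitions from the target while fio runs, checking that the kernel's per-path ana_state files follow. The sketch below paraphrases that control flow; the connect invocations, host NQN/ID and the 20-attempt budget are taken from the trace, while check_ana_state is a simplified rewrite of the helper seen above and its sleep-based polling is an assumption.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6
HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

# One connection per listener; native NVMe multipath merges them into a single
# namespace with two controller paths (nvme0c0n1 / nvme0c1n1 in the trace).
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID" -g -G
nvme connect -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID" -g -G

# Simplified version of the script's check_ana_state: poll the kernel's view of
# one path until it reports the expected ANA state, giving up after ~20 tries.
check_ana_state() {
    local path=$1 expected=$2 timeout=20
    local f=/sys/block/$path/ana_state
    until [[ -e $f && $(<"$f") == "$expected" ]]; do
        (( timeout-- > 0 )) || return 1
        sleep 1
    done
}

# Example transition: path 0 becomes inaccessible and path 1 non-optimized; the
# host-side sysfs state is expected to catch up while fio keeps issuing I/O.
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized

The two fio summaries above (pids 67383 and 67404/67505) show that throughput is sustained across these transitions, which is what the multipath test is asserting.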
08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.875 08:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.133 08:22:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:47.133 00:09:47.133 real 0m19.369s 00:09:47.133 user 1m12.384s 00:09:47.133 sys 0m9.726s 00:09:47.133 08:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.133 08:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.133 ************************************ 00:09:47.133 END TEST nvmf_target_multipath 00:09:47.133 ************************************ 00:09:47.133 08:22:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:47.133 08:22:39 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.133 08:22:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.133 08:22:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.133 08:22:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.133 ************************************ 00:09:47.133 START TEST nvmf_zcopy 00:09:47.133 ************************************ 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.133 * Looking for test storage... 
00:09:47.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.133 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:47.134 Cannot find device "nvmf_tgt_br" 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.134 Cannot find device "nvmf_tgt_br2" 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:47.134 Cannot find device "nvmf_tgt_br" 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:47.134 Cannot find device "nvmf_tgt_br2" 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:47.134 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:47.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:09:47.393 00:09:47.393 --- 10.0.0.2 ping statistics --- 00:09:47.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.393 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:47.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:47.393 00:09:47.393 --- 10.0.0.3 ping statistics --- 00:09:47.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.393 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
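The nvmf_veth_init block traced above builds the whole test network in software: an initiator-side veth (nvmf_init_if, 10.0.0.1/24) stays on the host, two target-side veths (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, their host-side peers are enslaved to the nvmf_br bridge, and iptables admits TCP/4420 plus bridge-internal forwarding; the three pings (the last of which completes just below) only confirm connectivity. A condensed, hand-written sketch of the same topology, with interface names and addresses taken from this log rather than from the literal common.sh code:

    #!/usr/bin/env bash
    # Build the veth/netns/bridge topology used by the nvmf TCP tests (names from the log).
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk          # target-side ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                          # host-side peers meet on one bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # namespace -> host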
00:09:47.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:47.393 00:09:47.393 --- 10.0.0.1 ping statistics --- 00:09:47.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.393 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67756 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67756 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67756 ']' 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.393 08:22:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.652 [2024-07-15 08:22:39.616649] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:47.652 [2024-07-15 08:22:39.616769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.652 [2024-07-15 08:22:39.756338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.910 [2024-07-15 08:22:39.872174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.910 [2024-07-15 08:22:39.872239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:47.910 [2024-07-15 08:22:39.872251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.910 [2024-07-15 08:22:39.872260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.910 [2024-07-15 08:22:39.872267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.910 [2024-07-15 08:22:39.872293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.910 [2024-07-15 08:22:39.924175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 [2024-07-15 08:22:40.650935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 [2024-07-15 08:22:40.667047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
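At this point nvmfappstart has launched nvmf_tgt inside the target namespace (pid 67756, core mask 0x2, reactor on core 1, uring socket override) and waited for its RPC socket, and zcopy.sh has provisioned it over RPC: a TCP transport created with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1, a data listener and a discovery listener on 10.0.0.2:4420, and a malloc bdev. The `malloc0` printed just below is the reply to that bdev_malloc_create call, and the namespace attach follows it. Condensed to its RPC sequence, where rpc_cmd is the test suite's wrapper around scripts/rpc.py and the flag values are copied from the trace rather than documented here:

    #!/usr/bin/env bash
    # Provisioning sequence for the zcopy test target, as traced in the log above.
    NQN=nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy on
    rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                   # prints the bdev name, "malloc0"
    rpc_cmd nvmf_subsystem_add_ns "$NQN" malloc0 -n 1               # expose malloc0 as NSID 1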
00:09:48.526 malloc0 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.526 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:48.784 { 00:09:48.784 "params": { 00:09:48.784 "name": "Nvme$subsystem", 00:09:48.784 "trtype": "$TEST_TRANSPORT", 00:09:48.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:48.784 "adrfam": "ipv4", 00:09:48.784 "trsvcid": "$NVMF_PORT", 00:09:48.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:48.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:48.784 "hdgst": ${hdgst:-false}, 00:09:48.784 "ddgst": ${ddgst:-false} 00:09:48.784 }, 00:09:48.784 "method": "bdev_nvme_attach_controller" 00:09:48.784 } 00:09:48.784 EOF 00:09:48.784 )") 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:48.784 08:22:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:48.784 "params": { 00:09:48.784 "name": "Nvme1", 00:09:48.784 "trtype": "tcp", 00:09:48.785 "traddr": "10.0.0.2", 00:09:48.785 "adrfam": "ipv4", 00:09:48.785 "trsvcid": "4420", 00:09:48.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:48.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:48.785 "hdgst": false, 00:09:48.785 "ddgst": false 00:09:48.785 }, 00:09:48.785 "method": "bdev_nvme_attach_controller" 00:09:48.785 }' 00:09:48.785 [2024-07-15 08:22:40.771436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:48.785 [2024-07-15 08:22:40.771560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67789 ] 00:09:48.785 [2024-07-15 08:22:40.916007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.042 [2024-07-15 08:22:41.045094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.042 [2024-07-15 08:22:41.106324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:49.300 Running I/O for 10 seconds... 
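The summary that follows comes from the first bdevperf pass: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry for Nvme1 pointed at 10.0.0.2:4420, the config is handed to bdevperf through process substitution (--json /dev/fd/62) so nothing is written to disk, and the workload is a 10-second verify run at queue depth 128 with 8 KiB I/O. A stand-alone sketch of the same run with the config written to a file; the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config layout filled in by hand here, since the trace only shows the inner controller entry:

    #!/usr/bin/env bash
    # First bdevperf pass against the target above, with the generated config written
    # to a file instead of the test's /dev/fd process substitution. Run from the SPDK repo root.
    printf '%s\n' '{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }' > /tmp/nvme1.json
    ./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192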
00:09:59.288 00:09:59.288 Latency(us) 00:09:59.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:59.288 Verification LBA range: start 0x0 length 0x1000 00:09:59.288 Nvme1n1 : 10.01 6068.15 47.41 0.00 0.00 21025.70 407.74 30980.65 00:09:59.288 =================================================================================================================== 00:09:59.288 Total : 6068.15 47.41 0.00 0.00 21025.70 407.74 30980.65 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67911 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:59.546 { 00:09:59.546 "params": { 00:09:59.546 "name": "Nvme$subsystem", 00:09:59.546 "trtype": "$TEST_TRANSPORT", 00:09:59.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.546 "adrfam": "ipv4", 00:09:59.546 "trsvcid": "$NVMF_PORT", 00:09:59.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.546 "hdgst": ${hdgst:-false}, 00:09:59.546 "ddgst": ${ddgst:-false} 00:09:59.546 }, 00:09:59.546 "method": "bdev_nvme_attach_controller" 00:09:59.546 } 00:09:59.546 EOF 00:09:59.546 )") 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:59.546 [2024-07-15 08:22:51.476712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.476766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:59.546 08:22:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:59.546 "params": { 00:09:59.546 "name": "Nvme1", 00:09:59.546 "trtype": "tcp", 00:09:59.546 "traddr": "10.0.0.2", 00:09:59.546 "adrfam": "ipv4", 00:09:59.546 "trsvcid": "4420", 00:09:59.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.546 "hdgst": false, 00:09:59.546 "ddgst": false 00:09:59.546 }, 00:09:59.546 "method": "bdev_nvme_attach_controller" 00:09:59.546 }' 00:09:59.546 [2024-07-15 08:22:51.488688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.488731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 [2024-07-15 08:22:51.496679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.496709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 [2024-07-15 08:22:51.504675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.504705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 [2024-07-15 08:22:51.512683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.512713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 [2024-07-15 08:22:51.520679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.520710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 [2024-07-15 08:22:51.532697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.546 [2024-07-15 08:22:51.532748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.546 [2024-07-15 08:22:51.537097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
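This startup banner (file prefix spdk_pid67911, continuing just below with its DPDK EAL parameters) belongs to the second bdevperf instance: it reuses the same generated Nvme1 config, delivered over /dev/fd/63 this time, but switches to a 5-second 50/50 random read/write mix (-w randrw -M 50, still queue depth 128 and 8 KiB I/O). zcopy.sh records its pid (perfpid=67911), which indicates the job is left running in the background so that management RPCs can be driven against the target while I/O is in flight. A sketch of that pattern, not the literal zcopy.sh code, with the loop body left as a placeholder; gen_nvmf_target_json is the helper from test/nvmf/common.sh sourced earlier in this log:

    #!/usr/bin/env bash
    # Sketch of the second phase: background a timed randrw job and keep its pid so
    # RPCs can be issued against the live target for the duration of the run.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do
        : # issue management RPCs here while bdevperf is running (see the add_ns attempts below)
    done
    wait "$perfpid"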
00:09:59.546 [2024-07-15 08:22:51.537205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67911 ] 00:09:59.546 [2024-07-15 08:22:51.544705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.544923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.556709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.556901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.568706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.568890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.580716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.580908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.592715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.592937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.604744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.604997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.616741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.616965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.628743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.628965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.640749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.640789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.652752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.652792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.664750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.664794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.676756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.676797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.684597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.547 [2024-07-15 08:22:51.688761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.688800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.700772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.700815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.547 [2024-07-15 08:22:51.712766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.547 [2024-07-15 08:22:51.712806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.805 [2024-07-15 08:22:51.724765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.724804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.736773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.736811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.748780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.748821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.760804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.760850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.772781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.772823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.784809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.784858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.796793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.796832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.802262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.806 [2024-07-15 08:22:51.808785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.808823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.820806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.820852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.832808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.832853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.844811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.844854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.856814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.856857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 
[2024-07-15 08:22:51.863898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.806 [2024-07-15 08:22:51.868831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.868879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.880834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.880878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.892820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.892860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.904806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.904844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.916854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.916902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.928834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.928878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.940840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.940883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.952866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.952911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.806 [2024-07-15 08:22:51.964853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.806 [2024-07-15 08:22:51.964897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:51.976868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:51.976915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 Running I/O for 5 seconds... 
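The error pairs interleaved above and streaming below (subsystem.c: "Requested NSID 1 already in use" followed by nvmf_rpc.c: "Unable to add namespace") are the target rejecting repeated nvmf_subsystem_add_ns calls issued while the 5-second randrw job runs: NSID 1 is still occupied by malloc0, so each re-add attempt fails at the RPC layer while I/O continues, presumably to exercise the subsystem pause/resume path (nvmf_rpc_ns_paused) with zero-copy requests outstanding. A minimal way to reproduce one such rejection against the target provisioned earlier; the failure is reported by the RPC and the run simply carries on:

    #!/usr/bin/env bash
    # Reproduce the logged rejection: NSID 1 on cnode1 is already taken by malloc0,
    # so a second add with the same NSID fails at the RPC layer.
    NQN=nqn.2016-06.io.spdk:cnode1
    if ! rpc_cmd nvmf_subsystem_add_ns "$NQN" malloc0 -n 1; then
        # Target-side log shows the same pair seen in this console output:
        #   subsystem.c: spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
        #   nvmf_rpc.c:  nvmf_rpc_ns_paused: Unable to add namespace
        echo "add_ns rejected as expected: NSID 1 already in use"
    fi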
00:10:00.065 [2024-07-15 08:22:51.988869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:51.988908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.006670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.006739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.021595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.021821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.037623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.037841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.054698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.054911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.071736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.071981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.087703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.087766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.106690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.106757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.121086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.121138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.136934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.136985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.153613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.153667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.169694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.169755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.185713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.185785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.203292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 [2024-07-15 08:22:52.203347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.065 [2024-07-15 08:22:52.219946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.065 
[2024-07-15 08:22:52.219998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.236815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.236872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.253764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.253818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.271451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.271506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.286155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.286210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.297700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.297767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.314651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.314704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.331524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.331582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.347899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.347956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.364847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.364904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.380702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.380771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.399495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.399558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.414397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.414455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.423493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.423544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.439693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.439760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.449344] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.449399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.465466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.465529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.475030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.475081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.323 [2024-07-15 08:22:52.490818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.323 [2024-07-15 08:22:52.490879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.503375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.503428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.518797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.518850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.528478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.528526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.544957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.545009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.561955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.562011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.579001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.579061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.589589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.589640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.603937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.604007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.614193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.614253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.628536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.628595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.644893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.644952] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.661733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.661794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.678105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.678163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.694927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.694985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.711810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.711864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.727755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.727814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.582 [2024-07-15 08:22:52.745285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.582 [2024-07-15 08:22:52.745343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.761877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.761931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.778538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.778595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.794827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.794882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.811935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.811992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.828143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.828201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.845224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.845284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.861829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.861889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.877970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.878032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.894753] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.894814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.910990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.911049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.928359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.928414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.944749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.944801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.961880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.961945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.977750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.977801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.841 [2024-07-15 08:22:52.995750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.841 [2024-07-15 08:22:52.995804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.012407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.012461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.028983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.029038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.045566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.045634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.062592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.062642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.078630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.078686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.095410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.095470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.112351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.112413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.100 [2024-07-15 08:22:53.129909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.100 [2024-07-15 08:22:53.129970] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:01.100 [2024-07-15 08:22:53.145491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:01.100 [2024-07-15 08:22:53.145551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two messages repeat for every nvmf_subsystem_add_ns attempt from 08:22:53.155276 through 08:22:56.994219; only the timestamps differ ...]
00:10:05.041
00:10:05.041 Latency(us)
00:10:05.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:05.041 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:05.041 Nvme1n1 : 5.01 11885.39 92.85 0.00 0.00 10754.45 4736.47 19899.11
00:10:05.042 ===================================================================================================================
00:10:05.042 Total : 11885.39 92.85 0.00 0.00 10754.45 4736.47 19899.11
[... the same pair of add-namespace errors continues from 08:22:57.003481 through 08:22:57.215576 while the background I/O job finishes ...]
00:10:05.300 [2024-07-15 08:22:57.227533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:05.300 [2024-07-15 08:22:57.227565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:05.301 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67911) - No such process
00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67911
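The errors collapsed above are expected for this test: while the background I/O job (pid 67911) is still running against NSID 1, the script keeps re-issuing nvmf_subsystem_add_ns for the same NSID, and the target rejects every attempt. A rough bash sketch of such a retry loop, for illustration only — the rpc.py path, RPC socket, bdev name, and the perfpid variable are assumptions, not the actual zcopy.sh code:

    # Keep poking the add-namespace RPC while the background I/O job is alive;
    # each call is rejected with "Requested NSID 1 already in use".
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py -s /var/tmp/spdk.sock \
            nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"

The loop ending is what produces the "kill: (67911) - No such process" and "wait 67911" lines just above.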
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.300 [2024-07-15 08:22:57.227565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.301 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67911) - No such process 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67911 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 delay0 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.301 08:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:05.301 [2024-07-15 08:22:57.435075] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:11.860 Initializing NVMe Controllers 00:10:11.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.860 Initialization complete. Launching workers. 
00:10:11.860 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:10:11.860 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:10:11.860 success 247, unsuccess 125, failed 0 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.860 rmmod nvme_tcp 00:10:11.860 rmmod nvme_fabrics 00:10:11.860 rmmod nvme_keyring 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67756 ']' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67756 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67756 ']' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67756 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67756 00:10:11.860 killing process with pid 67756 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67756' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67756 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67756 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:11.860 00:10:11.860 real 0m24.791s 00:10:11.860 user 0m41.118s 00:10:11.860 sys 0m6.523s 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.860 08:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.860 ************************************ 00:10:11.860 END TEST nvmf_zcopy 00:10:11.860 ************************************ 00:10:11.860 08:23:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:11.860 08:23:03 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:11.860 08:23:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.860 08:23:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.860 08:23:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.860 ************************************ 00:10:11.860 START TEST nvmf_nmic 00:10:11.860 ************************************ 00:10:11.860 08:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:11.860 * Looking for test storage... 00:10:12.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.133 Cannot find device "nvmf_tgt_br" 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.133 Cannot find device "nvmf_tgt_br2" 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.133 Cannot find device "nvmf_tgt_br" 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.133 Cannot find device "nvmf_tgt_br2" 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.133 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:10:12.393 00:10:12.393 --- 10.0.0.2 ping statistics --- 00:10:12.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.393 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:12.393 00:10:12.393 --- 10.0.0.3 ping statistics --- 00:10:12.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.393 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:12.393 00:10:12.393 --- 10.0.0.1 ping statistics --- 00:10:12.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.393 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68241 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68241 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68241 ']' 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.393 08:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.393 [2024-07-15 08:23:04.480391] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:12.393 [2024-07-15 08:23:04.480507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.651 [2024-07-15 08:23:04.624713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.651 [2024-07-15 08:23:04.758246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.651 [2024-07-15 08:23:04.758355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.651 [2024-07-15 08:23:04.758376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.651 [2024-07-15 08:23:04.758387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.651 [2024-07-15 08:23:04.758396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.651 [2024-07-15 08:23:04.758559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.651 [2024-07-15 08:23:04.759266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.651 [2024-07-15 08:23:04.759348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.651 [2024-07-15 08:23:04.759357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.651 [2024-07-15 08:23:04.816683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 [2024-07-15 08:23:05.480155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 Malloc0 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 [2024-07-15 08:23:05.545753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.583 test case1: single bdev can't be used in multiple subsystems 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 [2024-07-15 08:23:05.569618] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:13.583 [2024-07-15 08:23:05.569674] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:13.583 [2024-07-15 08:23:05.569694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.583 request: 00:10:13.583 { 00:10:13.583 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:13.583 "namespace": { 00:10:13.583 "bdev_name": "Malloc0", 00:10:13.583 "no_auto_visible": false 00:10:13.583 }, 00:10:13.583 "method": "nvmf_subsystem_add_ns", 00:10:13.583 "req_id": 1 00:10:13.583 } 00:10:13.583 Got JSON-RPC error response 00:10:13.583 response: 00:10:13.583 { 00:10:13.583 "code": -32602, 00:10:13.583 "message": "Invalid parameters" 00:10:13.583 } 00:10:13.583 Adding namespace failed - expected result. 
00:10:13.583 test case2: host connect to nvmf target in multiple paths 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.583 [2024-07-15 08:23:05.581756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.583 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:13.841 08:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.841 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:13.841 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.841 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:13.841 08:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:15.738 08:23:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:15.738 [global] 00:10:15.738 thread=1 00:10:15.738 invalidate=1 00:10:15.738 rw=write 00:10:15.738 time_based=1 00:10:15.738 runtime=1 00:10:15.738 ioengine=libaio 00:10:15.738 direct=1 00:10:15.738 bs=4096 00:10:15.738 iodepth=1 00:10:15.738 norandommap=0 00:10:15.738 numjobs=1 00:10:15.738 00:10:15.738 verify_dump=1 00:10:15.738 verify_backlog=512 00:10:15.738 verify_state_save=0 00:10:15.738 do_verify=1 00:10:15.738 verify=crc32c-intel 00:10:15.738 [job0] 00:10:15.738 filename=/dev/nvme0n1 00:10:15.738 Could not set queue depth (nvme0n1) 00:10:15.998 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.998 fio-3.35 00:10:15.998 Starting 1 thread 00:10:17.372 00:10:17.372 job0: (groupid=0, jobs=1): err= 0: pid=68328: Mon Jul 15 08:23:09 2024 00:10:17.372 read: IOPS=3007, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1000msec) 00:10:17.372 slat (nsec): min=11830, max=54714, avg=15505.96, stdev=4346.71 00:10:17.372 clat (usec): min=142, max=268, avg=176.77, stdev=14.13 00:10:17.372 lat (usec): min=157, max=289, avg=192.27, stdev=15.91 00:10:17.372 clat percentiles (usec): 00:10:17.372 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:17.372 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:10:17.372 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:10:17.372 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 243], 99.95th=[ 260], 00:10:17.372 | 99.99th=[ 269] 00:10:17.372 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:10:17.372 slat (usec): min=17, max=154, avg=24.70, stdev= 8.04 00:10:17.372 clat (usec): min=80, max=286, avg=108.81, stdev=13.64 00:10:17.372 lat (usec): min=106, max=393, avg=133.51, stdev=18.37 00:10:17.372 clat percentiles (usec): 00:10:17.372 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:10:17.372 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:10:17.372 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 129], 95.00th=[ 137], 00:10:17.372 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 260], 00:10:17.372 | 99.99th=[ 285] 00:10:17.372 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:17.372 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:17.372 lat (usec) : 100=13.34%, 250=86.58%, 500=0.08% 00:10:17.372 cpu : usr=2.70%, sys=9.40%, ctx=6079, majf=0, minf=2 00:10:17.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.372 issued rwts: total=3007,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.372 00:10:17.372 Run status group 0 (all jobs): 00:10:17.372 READ: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.7MiB (12.3MB), run=1000-1000msec 00:10:17.372 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1000-1000msec 00:10:17.372 00:10:17.372 Disk stats (read/write): 00:10:17.372 nvme0n1: ios=2610/2947, merge=0/0, ticks=469/347, in_queue=816, util=91.28% 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.372 08:23:09 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # return 0 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.373 rmmod nvme_tcp 00:10:17.373 rmmod nvme_fabrics 00:10:17.373 rmmod nvme_keyring 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68241 ']' 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68241 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68241 ']' 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68241 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68241 00:10:17.373 killing process with pid 68241 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68241' 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68241 00:10:17.373 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68241 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:17.631 00:10:17.631 real 0m5.765s 00:10:17.631 user 0m18.294s 00:10:17.631 sys 0m2.264s 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.631 08:23:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.631 ************************************ 00:10:17.631 END TEST nvmf_nmic 00:10:17.631 ************************************ 00:10:17.631 08:23:09 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:17.631 08:23:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.631 08:23:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:17.631 08:23:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.631 08:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.631 ************************************ 00:10:17.631 START TEST nvmf_fio_target 00:10:17.631 ************************************ 00:10:17.631 08:23:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.890 * Looking for test storage... 00:10:17.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:17.890 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:17.891 Cannot find device "nvmf_tgt_br" 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.891 Cannot find device "nvmf_tgt_br2" 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:17.891 Cannot find device "nvmf_tgt_br" 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:17.891 Cannot find device "nvmf_tgt_br2" 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:17.891 08:23:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.891 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.149 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:18.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:10:18.150 00:10:18.150 --- 10.0.0.2 ping statistics --- 00:10:18.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.150 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:18.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:18.150 00:10:18.150 --- 10.0.0.3 ping statistics --- 00:10:18.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.150 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:18.150 00:10:18.150 --- 10.0.0.1 ping statistics --- 00:10:18.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.150 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68506 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68506 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68506 ']' 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.150 08:23:10 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.150 08:23:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.150 [2024-07-15 08:23:10.285456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:18.150 [2024-07-15 08:23:10.285541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.408 [2024-07-15 08:23:10.419419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.408 [2024-07-15 08:23:10.535693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.408 [2024-07-15 08:23:10.535778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.408 [2024-07-15 08:23:10.535790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.408 [2024-07-15 08:23:10.535799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.408 [2024-07-15 08:23:10.535806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.408 [2024-07-15 08:23:10.536023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.408 [2024-07-15 08:23:10.536168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.408 [2024-07-15 08:23:10.536894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.408 [2024-07-15 08:23:10.536898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.668 [2024-07-15 08:23:10.590086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.234 08:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:19.493 [2024-07-15 08:23:11.566639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.493 08:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.752 08:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:19.752 08:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:10:20.010 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:20.010 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.268 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:20.268 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.526 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:20.526 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:20.785 08:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.351 08:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:21.351 08:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.351 08:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:21.351 08:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.937 08:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:21.937 08:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:21.937 08:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.196 08:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.196 08:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.455 08:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.455 08:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.713 08:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.972 [2024-07-15 08:23:15.012317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.972 08:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:23.231 08:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:23.491 08:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:26.049 08:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.049 [global] 00:10:26.049 thread=1 00:10:26.049 invalidate=1 00:10:26.049 rw=write 00:10:26.049 time_based=1 00:10:26.049 runtime=1 00:10:26.049 ioengine=libaio 00:10:26.049 direct=1 00:10:26.049 bs=4096 00:10:26.049 iodepth=1 00:10:26.049 norandommap=0 00:10:26.049 numjobs=1 00:10:26.049 00:10:26.049 verify_dump=1 00:10:26.049 verify_backlog=512 00:10:26.049 verify_state_save=0 00:10:26.049 do_verify=1 00:10:26.049 verify=crc32c-intel 00:10:26.049 [job0] 00:10:26.049 filename=/dev/nvme0n1 00:10:26.049 [job1] 00:10:26.049 filename=/dev/nvme0n2 00:10:26.049 [job2] 00:10:26.049 filename=/dev/nvme0n3 00:10:26.049 [job3] 00:10:26.049 filename=/dev/nvme0n4 00:10:26.049 Could not set queue depth (nvme0n1) 00:10:26.049 Could not set queue depth (nvme0n2) 00:10:26.049 Could not set queue depth (nvme0n3) 00:10:26.049 Could not set queue depth (nvme0n4) 00:10:26.049 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.049 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.049 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.049 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.049 fio-3.35 00:10:26.049 Starting 4 threads 00:10:27.017 00:10:27.017 job0: (groupid=0, jobs=1): err= 0: pid=68697: Mon Jul 15 08:23:19 2024 00:10:27.017 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:27.017 slat (nsec): min=11238, max=54866, avg=14468.94, stdev=2342.12 00:10:27.017 clat (usec): min=131, max=1511, avg=163.90, stdev=30.59 00:10:27.017 lat (usec): min=149, max=1524, avg=178.37, stdev=30.84 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:10:27.017 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:27.017 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:10:27.017 | 99.00th=[ 198], 99.50th=[ 212], 99.90th=[ 457], 99.95th=[ 635], 00:10:27.017 | 99.99th=[ 1516] 00:10:27.017 write: IOPS=3141, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:10:27.017 slat (nsec): 
min=13918, max=84425, avg=20835.72, stdev=3961.68 00:10:27.017 clat (usec): min=91, max=495, avg=119.56, stdev=18.19 00:10:27.017 lat (usec): min=109, max=538, avg=140.40, stdev=19.17 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 110], 00:10:27.017 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:10:27.017 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 139], 00:10:27.017 | 99.00th=[ 155], 99.50th=[ 227], 99.90th=[ 314], 99.95th=[ 420], 00:10:27.017 | 99.99th=[ 494] 00:10:27.017 bw ( KiB/s): min=12288, max=12288, per=29.82%, avg=12288.00, stdev= 0.00, samples=1 00:10:27.017 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:27.017 lat (usec) : 100=2.11%, 250=97.54%, 500=0.31%, 750=0.03% 00:10:27.017 lat (msec) : 2=0.02% 00:10:27.017 cpu : usr=2.10%, sys=8.80%, ctx=6218, majf=0, minf=14 00:10:27.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 issued rwts: total=3072,3145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.017 job1: (groupid=0, jobs=1): err= 0: pid=68698: Mon Jul 15 08:23:19 2024 00:10:27.017 read: IOPS=1781, BW=7125KiB/s (7296kB/s)(7132KiB/1001msec) 00:10:27.017 slat (nsec): min=12208, max=49681, avg=16837.73, stdev=4905.51 00:10:27.017 clat (usec): min=175, max=7318, avg=298.16, stdev=262.42 00:10:27.017 lat (usec): min=189, max=7334, avg=315.00, stdev=263.35 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:10:27.017 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:10:27.017 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 338], 95.00th=[ 482], 00:10:27.017 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 6456], 99.95th=[ 7308], 00:10:27.017 | 99.99th=[ 7308] 00:10:27.017 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:27.017 slat (usec): min=17, max=141, avg=21.43, stdev= 5.00 00:10:27.017 clat (usec): min=94, max=1983, avg=189.01, stdev=51.26 00:10:27.017 lat (usec): min=114, max=2016, avg=210.44, stdev=51.98 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 102], 5.00th=[ 115], 10.00th=[ 129], 20.00th=[ 180], 00:10:27.017 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:10:27.017 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 212], 95.00th=[ 219], 00:10:27.017 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 392], 99.95th=[ 627], 00:10:27.017 | 99.99th=[ 1991] 00:10:27.017 bw ( KiB/s): min= 8192, max= 8192, per=19.88%, avg=8192.00, stdev= 0.00, samples=1 00:10:27.017 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:27.017 lat (usec) : 100=0.37%, 250=58.42%, 500=39.89%, 750=1.17% 00:10:27.017 lat (msec) : 2=0.03%, 4=0.08%, 10=0.05% 00:10:27.017 cpu : usr=1.70%, sys=5.80%, ctx=3832, majf=0, minf=7 00:10:27.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.018 issued rwts: total=1783,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.018 job2: (groupid=0, 
jobs=1): err= 0: pid=68699: Mon Jul 15 08:23:19 2024 00:10:27.018 read: IOPS=1919, BW=7676KiB/s (7861kB/s)(7684KiB/1001msec) 00:10:27.018 slat (nsec): min=11532, max=63276, avg=15464.13, stdev=3589.36 00:10:27.018 clat (usec): min=147, max=516, avg=270.87, stdev=45.82 00:10:27.018 lat (usec): min=160, max=531, avg=286.33, stdev=46.94 00:10:27.018 clat percentiles (usec): 00:10:27.018 | 1.00th=[ 159], 5.00th=[ 184], 10.00th=[ 241], 20.00th=[ 253], 00:10:27.018 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:10:27.018 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 355], 00:10:27.018 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 494], 99.95th=[ 519], 00:10:27.018 | 99.99th=[ 519] 00:10:27.018 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:27.018 slat (nsec): min=16977, max=97583, avg=22175.73, stdev=4716.46 00:10:27.018 clat (usec): min=105, max=836, avg=193.91, stdev=40.96 00:10:27.018 lat (usec): min=123, max=856, avg=216.09, stdev=42.73 00:10:27.018 clat percentiles (usec): 00:10:27.018 | 1.00th=[ 122], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 176], 00:10:27.018 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:10:27.018 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 277], 00:10:27.018 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 379], 99.95th=[ 379], 00:10:27.018 | 99.99th=[ 840] 00:10:27.018 bw ( KiB/s): min= 8192, max= 8192, per=19.88%, avg=8192.00, stdev= 0.00, samples=1 00:10:27.018 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:27.018 lat (usec) : 250=57.14%, 500=42.81%, 750=0.03%, 1000=0.03% 00:10:27.018 cpu : usr=1.80%, sys=5.60%, ctx=3973, majf=0, minf=9 00:10:27.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.018 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.018 job3: (groupid=0, jobs=1): err= 0: pid=68700: Mon Jul 15 08:23:19 2024 00:10:27.018 read: IOPS=2691, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:10:27.018 slat (nsec): min=11867, max=45291, avg=15910.26, stdev=3130.66 00:10:27.018 clat (usec): min=147, max=402, avg=175.60, stdev=12.95 00:10:27.018 lat (usec): min=161, max=418, avg=191.51, stdev=13.54 00:10:27.018 clat percentiles (usec): 00:10:27.018 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:27.018 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:10:27.018 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:10:27.018 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 241], 00:10:27.018 | 99.99th=[ 404] 00:10:27.018 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:27.018 slat (usec): min=14, max=139, avg=22.87, stdev= 5.61 00:10:27.018 clat (usec): min=102, max=1632, avg=131.22, stdev=29.49 00:10:27.018 lat (usec): min=121, max=1658, avg=154.09, stdev=30.25 00:10:27.018 clat percentiles (usec): 00:10:27.018 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 122], 00:10:27.018 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 133], 00:10:27.018 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 151], 00:10:27.018 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 221], 99.95th=[ 265], 00:10:27.018 | 99.99th=[ 1631] 00:10:27.018 
bw ( KiB/s): min=12288, max=12288, per=29.82%, avg=12288.00, stdev= 0.00, samples=1 00:10:27.018 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:27.018 lat (usec) : 250=99.93%, 500=0.05% 00:10:27.018 lat (msec) : 2=0.02% 00:10:27.018 cpu : usr=2.70%, sys=8.50%, ctx=5766, majf=0, minf=5 00:10:27.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.018 issued rwts: total=2694,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.018 00:10:27.018 Run status group 0 (all jobs): 00:10:27.018 READ: bw=37.0MiB/s (38.8MB/s), 7125KiB/s-12.0MiB/s (7296kB/s-12.6MB/s), io=37.0MiB (38.8MB), run=1001-1001msec 00:10:27.018 WRITE: bw=40.2MiB/s (42.2MB/s), 8184KiB/s-12.3MiB/s (8380kB/s-12.9MB/s), io=40.3MiB (42.2MB), run=1001-1001msec 00:10:27.018 00:10:27.018 Disk stats (read/write): 00:10:27.018 nvme0n1: ios=2610/2837, merge=0/0, ticks=452/364, in_queue=816, util=88.78% 00:10:27.018 nvme0n2: ios=1582/1724, merge=0/0, ticks=483/348, in_queue=831, util=87.68% 00:10:27.018 nvme0n3: ios=1536/2004, merge=0/0, ticks=406/401, in_queue=807, util=89.21% 00:10:27.018 nvme0n4: ios=2409/2560, merge=0/0, ticks=427/363, in_queue=790, util=89.76% 00:10:27.018 08:23:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:27.018 [global] 00:10:27.018 thread=1 00:10:27.018 invalidate=1 00:10:27.018 rw=randwrite 00:10:27.018 time_based=1 00:10:27.018 runtime=1 00:10:27.018 ioengine=libaio 00:10:27.018 direct=1 00:10:27.018 bs=4096 00:10:27.018 iodepth=1 00:10:27.018 norandommap=0 00:10:27.018 numjobs=1 00:10:27.018 00:10:27.018 verify_dump=1 00:10:27.018 verify_backlog=512 00:10:27.018 verify_state_save=0 00:10:27.018 do_verify=1 00:10:27.018 verify=crc32c-intel 00:10:27.018 [job0] 00:10:27.018 filename=/dev/nvme0n1 00:10:27.018 [job1] 00:10:27.018 filename=/dev/nvme0n2 00:10:27.018 [job2] 00:10:27.018 filename=/dev/nvme0n3 00:10:27.018 [job3] 00:10:27.018 filename=/dev/nvme0n4 00:10:27.018 Could not set queue depth (nvme0n1) 00:10:27.018 Could not set queue depth (nvme0n2) 00:10:27.018 Could not set queue depth (nvme0n3) 00:10:27.018 Could not set queue depth (nvme0n4) 00:10:27.018 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.018 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.018 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.018 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.018 fio-3.35 00:10:27.018 Starting 4 threads 00:10:28.394 00:10:28.394 job0: (groupid=0, jobs=1): err= 0: pid=68753: Mon Jul 15 08:23:20 2024 00:10:28.394 read: IOPS=1854, BW=7417KiB/s (7595kB/s)(7424KiB/1001msec) 00:10:28.394 slat (nsec): min=8078, max=93198, avg=16243.81, stdev=6609.72 00:10:28.394 clat (usec): min=145, max=7602, avg=280.57, stdev=279.05 00:10:28.394 lat (usec): min=160, max=7615, avg=296.82, stdev=279.37 00:10:28.394 clat percentiles (usec): 00:10:28.394 | 1.00th=[ 169], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:10:28.394 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 
262], 60.00th=[ 265], 00:10:28.394 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 338], 00:10:28.394 | 99.00th=[ 420], 99.50th=[ 562], 99.90th=[ 6849], 99.95th=[ 7635], 00:10:28.394 | 99.99th=[ 7635] 00:10:28.394 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:28.394 slat (nsec): min=10009, max=97720, avg=20291.46, stdev=7549.89 00:10:28.394 clat (usec): min=104, max=415, avg=195.69, stdev=24.18 00:10:28.394 lat (usec): min=123, max=430, avg=215.98, stdev=26.21 00:10:28.394 clat percentiles (usec): 00:10:28.394 | 1.00th=[ 127], 5.00th=[ 149], 10.00th=[ 174], 20.00th=[ 184], 00:10:28.394 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:10:28.394 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 233], 00:10:28.394 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 330], 00:10:28.394 | 99.99th=[ 416] 00:10:28.394 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.394 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.394 lat (usec) : 250=63.93%, 500=35.76%, 750=0.10%, 1000=0.03% 00:10:28.394 lat (msec) : 2=0.03%, 4=0.10%, 10=0.05% 00:10:28.394 cpu : usr=1.50%, sys=6.00%, ctx=3905, majf=0, minf=12 00:10:28.394 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.395 issued rwts: total=1856,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.395 job1: (groupid=0, jobs=1): err= 0: pid=68754: Mon Jul 15 08:23:20 2024 00:10:28.395 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:10:28.395 slat (usec): min=11, max=113, avg=18.68, stdev= 5.61 00:10:28.395 clat (usec): min=86, max=291, avg=169.95, stdev=12.13 00:10:28.395 lat (usec): min=153, max=309, avg=188.63, stdev=12.91 00:10:28.395 clat percentiles (usec): 00:10:28.395 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:10:28.395 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:28.395 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 190], 00:10:28.395 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 249], 99.95th=[ 265], 00:10:28.395 | 99.99th=[ 293] 00:10:28.395 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:28.395 slat (nsec): min=14615, max=97517, avg=26136.33, stdev=5889.72 00:10:28.395 clat (usec): min=93, max=269, avg=121.75, stdev=10.73 00:10:28.395 lat (usec): min=112, max=302, avg=147.89, stdev=12.30 00:10:28.395 clat percentiles (usec): 00:10:28.395 | 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 115], 00:10:28.395 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 124], 00:10:28.395 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 139], 00:10:28.395 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 206], 99.95th=[ 265], 00:10:28.395 | 99.99th=[ 269] 00:10:28.395 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:28.395 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:28.395 lat (usec) : 100=0.41%, 250=99.51%, 500=0.08% 00:10:28.395 cpu : usr=2.60%, sys=10.40%, ctx=5913, majf=0, minf=13 00:10:28.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:28.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.395 issued rwts: total=2830,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.395 job2: (groupid=0, jobs=1): err= 0: pid=68755: Mon Jul 15 08:23:20 2024 00:10:28.395 read: IOPS=2488, BW=9954KiB/s (10.2MB/s)(9964KiB/1001msec) 00:10:28.395 slat (nsec): min=11107, max=37069, avg=13578.02, stdev=2919.63 00:10:28.395 clat (usec): min=146, max=1148, avg=204.05, stdev=64.42 00:10:28.395 lat (usec): min=158, max=1171, avg=217.63, stdev=65.20 00:10:28.395 clat percentiles (usec): 00:10:28.395 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:28.395 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 186], 00:10:28.395 | 70.00th=[ 243], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 281], 00:10:28.395 | 99.00th=[ 363], 99.50th=[ 502], 99.90th=[ 979], 99.95th=[ 1057], 00:10:28.395 | 99.99th=[ 1156] 00:10:28.395 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:28.395 slat (nsec): min=13661, max=85843, avg=19688.07, stdev=3892.12 00:10:28.395 clat (usec): min=98, max=414, avg=155.80, stdev=40.11 00:10:28.395 lat (usec): min=116, max=443, avg=175.49, stdev=40.90 00:10:28.395 clat percentiles (usec): 00:10:28.395 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 121], 00:10:28.395 | 30.00th=[ 125], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 161], 00:10:28.395 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 215], 00:10:28.395 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 334], 00:10:28.395 | 99.99th=[ 416] 00:10:28.395 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.395 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.395 lat (usec) : 100=0.02%, 250=85.82%, 500=13.88%, 750=0.16%, 1000=0.08% 00:10:28.395 lat (msec) : 2=0.04% 00:10:28.395 cpu : usr=2.00%, sys=6.60%, ctx=5052, majf=0, minf=15 00:10:28.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.395 issued rwts: total=2491,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.395 job3: (groupid=0, jobs=1): err= 0: pid=68756: Mon Jul 15 08:23:20 2024 00:10:28.395 read: IOPS=2273, BW=9095KiB/s (9313kB/s)(9104KiB/1001msec) 00:10:28.395 slat (usec): min=8, max=119, avg=15.69, stdev= 4.35 00:10:28.395 clat (usec): min=148, max=605, avg=214.95, stdev=54.30 00:10:28.395 lat (usec): min=162, max=619, avg=230.64, stdev=54.76 00:10:28.395 clat percentiles (usec): 00:10:28.395 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:10:28.395 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 239], 00:10:28.395 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 310], 00:10:28.395 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 562], 99.95th=[ 594], 00:10:28.395 | 99.99th=[ 603] 00:10:28.395 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:28.395 slat (usec): min=10, max=107, avg=22.31, stdev= 7.81 00:10:28.395 clat (usec): min=97, max=1823, avg=159.83, stdev=53.58 00:10:28.395 lat (usec): min=120, max=1843, avg=182.14, stdev=54.46 00:10:28.395 clat percentiles (usec): 00:10:28.395 | 1.00th=[ 109], 5.00th=[ 115], 
10.00th=[ 120], 20.00th=[ 125], 00:10:28.395 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 143], 60.00th=[ 165], 00:10:28.395 | 70.00th=[ 186], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 225], 00:10:28.395 | 99.00th=[ 260], 99.50th=[ 314], 99.90th=[ 594], 99.95th=[ 627], 00:10:28.395 | 99.99th=[ 1827] 00:10:28.395 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:28.395 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:28.395 lat (usec) : 100=0.02%, 250=83.81%, 500=16.00%, 750=0.14% 00:10:28.395 lat (msec) : 2=0.02% 00:10:28.395 cpu : usr=1.70%, sys=7.90%, ctx=4840, majf=0, minf=5 00:10:28.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.395 issued rwts: total=2276,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.395 00:10:28.395 Run status group 0 (all jobs): 00:10:28.395 READ: bw=36.9MiB/s (38.7MB/s), 7417KiB/s-11.0MiB/s (7595kB/s-11.6MB/s), io=36.9MiB (38.7MB), run=1001-1001msec 00:10:28.395 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:28.395 00:10:28.395 Disk stats (read/write): 00:10:28.395 nvme0n1: ios=1586/1883, merge=0/0, ticks=441/376, in_queue=817, util=88.58% 00:10:28.395 nvme0n2: ios=2579/2560, merge=0/0, ticks=472/341, in_queue=813, util=89.71% 00:10:28.395 nvme0n3: ios=2048/2248, merge=0/0, ticks=436/370, in_queue=806, util=89.34% 00:10:28.395 nvme0n4: ios=2048/2228, merge=0/0, ticks=427/368, in_queue=795, util=89.80% 00:10:28.395 08:23:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:28.395 [global] 00:10:28.395 thread=1 00:10:28.395 invalidate=1 00:10:28.395 rw=write 00:10:28.395 time_based=1 00:10:28.395 runtime=1 00:10:28.395 ioengine=libaio 00:10:28.395 direct=1 00:10:28.395 bs=4096 00:10:28.395 iodepth=128 00:10:28.395 norandommap=0 00:10:28.395 numjobs=1 00:10:28.395 00:10:28.395 verify_dump=1 00:10:28.395 verify_backlog=512 00:10:28.395 verify_state_save=0 00:10:28.395 do_verify=1 00:10:28.395 verify=crc32c-intel 00:10:28.395 [job0] 00:10:28.395 filename=/dev/nvme0n1 00:10:28.395 [job1] 00:10:28.395 filename=/dev/nvme0n2 00:10:28.395 [job2] 00:10:28.395 filename=/dev/nvme0n3 00:10:28.395 [job3] 00:10:28.395 filename=/dev/nvme0n4 00:10:28.395 Could not set queue depth (nvme0n1) 00:10:28.395 Could not set queue depth (nvme0n2) 00:10:28.395 Could not set queue depth (nvme0n3) 00:10:28.395 Could not set queue depth (nvme0n4) 00:10:28.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.395 fio-3.35 00:10:28.395 Starting 4 threads 00:10:29.770 00:10:29.770 job0: (groupid=0, jobs=1): err= 0: pid=68810: Mon Jul 15 08:23:21 2024 00:10:29.770 read: IOPS=3055, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:29.770 slat (usec): min=6, max=9067, avg=163.05, 
stdev=749.82 00:10:29.770 clat (usec): min=1544, max=47560, avg=20054.45, stdev=7132.08 00:10:29.770 lat (usec): min=3939, max=47599, avg=20217.50, stdev=7180.58 00:10:29.770 clat percentiles (usec): 00:10:29.770 | 1.00th=[ 8225], 5.00th=[13435], 10.00th=[15008], 20.00th=[15533], 00:10:29.770 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16319], 60.00th=[18220], 00:10:29.770 | 70.00th=[22938], 80.00th=[25035], 90.00th=[31327], 95.00th=[36439], 00:10:29.770 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[43779], 00:10:29.770 | 99.99th=[47449] 00:10:29.770 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:10:29.770 slat (usec): min=11, max=8956, avg=154.03, stdev=722.50 00:10:29.770 clat (usec): min=10088, max=53160, avg=21285.18, stdev=9291.55 00:10:29.770 lat (usec): min=10112, max=53212, avg=21439.22, stdev=9367.01 00:10:29.770 clat percentiles (usec): 00:10:29.770 | 1.00th=[11076], 5.00th=[11994], 10.00th=[12256], 20.00th=[14353], 00:10:29.770 | 30.00th=[15008], 40.00th=[16057], 50.00th=[18220], 60.00th=[20579], 00:10:29.770 | 70.00th=[23987], 80.00th=[27132], 90.00th=[36963], 95.00th=[43254], 00:10:29.770 | 99.00th=[45351], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:10:29.770 | 99.99th=[53216] 00:10:29.770 bw ( KiB/s): min=12263, max=12288, per=18.30%, avg=12275.50, stdev=17.68, samples=2 00:10:29.770 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:10:29.770 lat (msec) : 2=0.02%, 4=0.05%, 10=0.98%, 20=59.69%, 50=39.04% 00:10:29.770 lat (msec) : 100=0.23% 00:10:29.770 cpu : usr=2.19%, sys=10.77%, ctx=288, majf=0, minf=4 00:10:29.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:29.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.770 issued rwts: total=3068,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.770 job1: (groupid=0, jobs=1): err= 0: pid=68812: Mon Jul 15 08:23:21 2024 00:10:29.770 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:29.770 slat (usec): min=5, max=2621, avg=83.19, stdev=377.55 00:10:29.770 clat (usec): min=8313, max=12930, avg=11134.56, stdev=533.58 00:10:29.770 lat (usec): min=9243, max=12944, avg=11217.75, stdev=386.47 00:10:29.770 clat percentiles (usec): 00:10:29.770 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[10683], 20.00th=[10814], 00:10:29.770 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:29.770 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:10:29.770 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12125], 99.95th=[12256], 00:10:29.770 | 99.99th=[12911] 00:10:29.770 write: IOPS=5982, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1001msec); 0 zone resets 00:10:29.770 slat (usec): min=10, max=4215, avg=81.25, stdev=328.51 00:10:29.770 clat (usec): min=295, max=13525, avg=10654.16, stdev=992.02 00:10:29.770 lat (usec): min=1765, max=13550, avg=10735.41, stdev=938.25 00:10:29.770 clat percentiles (usec): 00:10:29.770 | 1.00th=[ 5604], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:10:29.770 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:10:29.770 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11207], 95.00th=[11600], 00:10:29.770 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13566], 99.95th=[13566], 00:10:29.770 | 99.99th=[13566] 00:10:29.770 bw ( KiB/s): min=24576, max=24576, 
per=36.63%, avg=24576.00, stdev= 0.00, samples=1 00:10:29.770 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:29.770 lat (usec) : 500=0.01% 00:10:29.770 lat (msec) : 2=0.05%, 4=0.22%, 10=4.62%, 20=95.09% 00:10:29.770 cpu : usr=4.30%, sys=16.70%, ctx=383, majf=0, minf=5 00:10:29.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:29.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.770 issued rwts: total=5632,5988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.770 job2: (groupid=0, jobs=1): err= 0: pid=68817: Mon Jul 15 08:23:21 2024 00:10:29.770 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:10:29.770 slat (usec): min=6, max=15773, avg=191.21, stdev=880.81 00:10:29.770 clat (usec): min=14257, max=58382, avg=25252.56, stdev=9308.03 00:10:29.770 lat (usec): min=14392, max=58411, avg=25443.78, stdev=9387.40 00:10:29.770 clat percentiles (usec): 00:10:29.770 | 1.00th=[15664], 5.00th=[16319], 10.00th=[16581], 20.00th=[17171], 00:10:29.770 | 30.00th=[17433], 40.00th=[20317], 50.00th=[23462], 60.00th=[25035], 00:10:29.770 | 70.00th=[26346], 80.00th=[31851], 90.00th=[41681], 95.00th=[44303], 00:10:29.770 | 99.00th=[53740], 99.50th=[53740], 99.90th=[57934], 99.95th=[58459], 00:10:29.770 | 99.99th=[58459] 00:10:29.770 write: IOPS=2470, BW=9883KiB/s (10.1MB/s)(9932KiB/1005msec); 0 zone resets 00:10:29.770 slat (usec): min=13, max=14376, avg=235.88, stdev=956.04 00:10:29.770 clat (usec): min=3977, max=72623, avg=29783.09, stdev=11905.15 00:10:29.770 lat (usec): min=4025, max=72675, avg=30018.97, stdev=11982.36 00:10:29.770 clat percentiles (usec): 00:10:29.770 | 1.00th=[11600], 5.00th=[17957], 10.00th=[18220], 20.00th=[20317], 00:10:29.770 | 30.00th=[23462], 40.00th=[24249], 50.00th=[25822], 60.00th=[28967], 00:10:29.770 | 70.00th=[32637], 80.00th=[38536], 90.00th=[42206], 95.00th=[55837], 00:10:29.770 | 99.00th=[70779], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:10:29.770 | 99.99th=[72877] 00:10:29.770 bw ( KiB/s): min= 8656, max=10192, per=14.05%, avg=9424.00, stdev=1086.12, samples=2 00:10:29.770 iops : min= 2164, max= 2548, avg=2356.00, stdev=271.53, samples=2 00:10:29.770 lat (msec) : 4=0.02%, 10=0.20%, 20=26.64%, 50=69.01%, 100=4.13% 00:10:29.770 cpu : usr=3.19%, sys=7.47%, ctx=304, majf=0, minf=15 00:10:29.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:29.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.771 issued rwts: total=2048,2483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.771 job3: (groupid=0, jobs=1): err= 0: pid=68819: Mon Jul 15 08:23:21 2024 00:10:29.771 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:29.771 slat (usec): min=5, max=3000, avg=92.06, stdev=430.88 00:10:29.771 clat (usec): min=9211, max=13404, avg=12357.51, stdev=583.00 00:10:29.771 lat (usec): min=11539, max=13463, avg=12449.57, stdev=398.75 00:10:29.771 clat percentiles (usec): 00:10:29.771 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:10:29.771 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:10:29.771 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 
95.00th=[13042], 00:10:29.771 | 99.00th=[13304], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:10:29.771 | 99.99th=[13435] 00:10:29.771 write: IOPS=5307, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1001msec); 0 zone resets 00:10:29.771 slat (usec): min=10, max=4757, avg=91.75, stdev=382.45 00:10:29.771 clat (usec): min=216, max=14763, avg=11897.63, stdev=1095.19 00:10:29.771 lat (usec): min=2112, max=14786, avg=11989.38, stdev=1026.71 00:10:29.771 clat percentiles (usec): 00:10:29.771 | 1.00th=[ 6128], 5.00th=[11076], 10.00th=[11469], 20.00th=[11600], 00:10:29.771 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:29.771 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:10:29.771 | 99.00th=[14222], 99.50th=[14484], 99.90th=[14746], 99.95th=[14746], 00:10:29.771 | 99.99th=[14746] 00:10:29.771 bw ( KiB/s): min=20488, max=20488, per=30.54%, avg=20488.00, stdev= 0.00, samples=1 00:10:29.771 iops : min= 5122, max= 5122, avg=5122.00, stdev= 0.00, samples=1 00:10:29.771 lat (usec) : 250=0.01% 00:10:29.771 lat (msec) : 4=0.31%, 10=2.43%, 20=97.25% 00:10:29.771 cpu : usr=4.60%, sys=15.00%, ctx=366, majf=0, minf=7 00:10:29.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:29.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.771 issued rwts: total=5120,5313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.771 00:10:29.771 Run status group 0 (all jobs): 00:10:29.771 READ: bw=61.7MiB/s (64.7MB/s), 8151KiB/s-22.0MiB/s (8347kB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1001-1005msec 00:10:29.771 WRITE: bw=65.5MiB/s (68.7MB/s), 9883KiB/s-23.4MiB/s (10.1MB/s-24.5MB/s), io=65.8MiB (69.0MB), run=1001-1005msec 00:10:29.771 00:10:29.771 Disk stats (read/write): 00:10:29.771 nvme0n1: ios=2610/2775, merge=0/0, ticks=26954/23639, in_queue=50593, util=88.88% 00:10:29.771 nvme0n2: ios=5041/5120, merge=0/0, ticks=12490/11340, in_queue=23830, util=89.29% 00:10:29.771 nvme0n3: ios=1809/2048, merge=0/0, ticks=15032/19545, in_queue=34577, util=89.31% 00:10:29.771 nvme0n4: ios=4448/4608, merge=0/0, ticks=12332/11558, in_queue=23890, util=89.77% 00:10:29.771 08:23:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.771 [global] 00:10:29.771 thread=1 00:10:29.771 invalidate=1 00:10:29.771 rw=randwrite 00:10:29.771 time_based=1 00:10:29.771 runtime=1 00:10:29.771 ioengine=libaio 00:10:29.771 direct=1 00:10:29.771 bs=4096 00:10:29.771 iodepth=128 00:10:29.771 norandommap=0 00:10:29.771 numjobs=1 00:10:29.771 00:10:29.771 verify_dump=1 00:10:29.771 verify_backlog=512 00:10:29.771 verify_state_save=0 00:10:29.771 do_verify=1 00:10:29.771 verify=crc32c-intel 00:10:29.771 [job0] 00:10:29.771 filename=/dev/nvme0n1 00:10:29.771 [job1] 00:10:29.771 filename=/dev/nvme0n2 00:10:29.771 [job2] 00:10:29.771 filename=/dev/nvme0n3 00:10:29.771 [job3] 00:10:29.771 filename=/dev/nvme0n4 00:10:29.771 Could not set queue depth (nvme0n1) 00:10:29.771 Could not set queue depth (nvme0n2) 00:10:29.771 Could not set queue depth (nvme0n3) 00:10:29.771 Could not set queue depth (nvme0n4) 00:10:29.771 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.771 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.771 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.771 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.771 fio-3.35 00:10:29.771 Starting 4 threads 00:10:31.247 00:10:31.247 job0: (groupid=0, jobs=1): err= 0: pid=68872: Mon Jul 15 08:23:23 2024 00:10:31.247 read: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec) 00:10:31.247 slat (usec): min=5, max=25561, avg=387.41, stdev=1814.84 00:10:31.247 clat (usec): min=29525, max=78385, avg=49552.03, stdev=10269.66 00:10:31.247 lat (usec): min=29540, max=78398, avg=49939.44, stdev=10414.96 00:10:31.247 clat percentiles (usec): 00:10:31.248 | 1.00th=[29754], 5.00th=[38536], 10.00th=[40109], 20.00th=[40633], 00:10:31.248 | 30.00th=[40633], 40.00th=[42206], 50.00th=[45876], 60.00th=[49546], 00:10:31.248 | 70.00th=[58459], 80.00th=[60556], 90.00th=[62653], 95.00th=[65274], 00:10:31.248 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[78119], 00:10:31.248 | 99.99th=[78119] 00:10:31.248 write: IOPS=1214, BW=4858KiB/s (4975kB/s)(4936KiB/1016msec); 0 zone resets 00:10:31.248 slat (usec): min=4, max=15257, avg=483.58, stdev=1812.68 00:10:31.248 clat (msec): min=13, max=107, avg=62.30, stdev=29.03 00:10:31.248 lat (msec): min=18, max=111, avg=62.79, stdev=29.20 00:10:31.248 clat percentiles (msec): 00:10:31.248 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 34], 00:10:31.248 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 62], 60.00th=[ 84], 00:10:31.248 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 97], 95.00th=[ 97], 00:10:31.248 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 108], 00:10:31.248 | 99.99th=[ 108] 00:10:31.248 bw ( KiB/s): min= 3296, max= 5552, per=8.47%, avg=4424.00, stdev=1595.23, samples=2 00:10:31.248 iops : min= 824, max= 1388, avg=1106.00, stdev=398.81, samples=2 00:10:31.248 lat (msec) : 20=0.31%, 50=52.88%, 100=46.19%, 250=0.62% 00:10:31.248 cpu : usr=1.28%, sys=2.86%, ctx=325, majf=0, minf=15 00:10:31.248 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:10:31.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.248 issued rwts: total=1024,1234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.248 job1: (groupid=0, jobs=1): err= 0: pid=68873: Mon Jul 15 08:23:23 2024 00:10:31.248 read: IOPS=4160, BW=16.3MiB/s (17.0MB/s)(16.3MiB/1003msec) 00:10:31.248 slat (usec): min=8, max=7910, avg=129.22, stdev=623.00 00:10:31.248 clat (usec): min=2351, max=48813, avg=16687.08, stdev=5698.54 00:10:31.248 lat (usec): min=2365, max=48834, avg=16816.30, stdev=5732.50 00:10:31.248 clat percentiles (usec): 00:10:31.248 | 1.00th=[ 8979], 5.00th=[11863], 10.00th=[12649], 20.00th=[13435], 00:10:31.248 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14222], 60.00th=[15664], 00:10:31.248 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20579], 95.00th=[22938], 00:10:31.248 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:31.248 | 99.99th=[49021] 00:10:31.248 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:31.248 slat (usec): min=11, max=12550, avg=92.03, stdev=567.60 00:10:31.248 clat (usec): min=6951, max=37617, avg=12408.48, stdev=3881.02 00:10:31.248 lat (usec): min=6973, 
max=37662, avg=12500.51, stdev=3933.04 00:10:31.248 clat percentiles (usec): 00:10:31.248 | 1.00th=[ 8160], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:10:31.248 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[11207], 00:10:31.248 | 70.00th=[13304], 80.00th=[14484], 90.00th=[18744], 95.00th=[19792], 00:10:31.248 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30278], 99.95th=[30540], 00:10:31.248 | 99.99th=[37487] 00:10:31.248 bw ( KiB/s): min=16384, max=20080, per=34.90%, avg=18232.00, stdev=2613.47, samples=2 00:10:31.248 iops : min= 4096, max= 5020, avg=4558.00, stdev=653.37, samples=2 00:10:31.248 lat (msec) : 4=0.15%, 10=14.04%, 20=75.52%, 50=10.29% 00:10:31.248 cpu : usr=3.99%, sys=12.38%, ctx=266, majf=0, minf=5 00:10:31.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:31.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.248 issued rwts: total=4173,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.248 job2: (groupid=0, jobs=1): err= 0: pid=68874: Mon Jul 15 08:23:23 2024 00:10:31.248 read: IOPS=5941, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1003msec) 00:10:31.248 slat (usec): min=5, max=5609, avg=77.90, stdev=471.00 00:10:31.248 clat (usec): min=1330, max=17720, avg=10931.93, stdev=1305.81 00:10:31.248 lat (usec): min=4895, max=20910, avg=11009.82, stdev=1321.35 00:10:31.248 clat percentiles (usec): 00:10:31.248 | 1.00th=[ 5932], 5.00th=[ 8356], 10.00th=[10290], 20.00th=[10552], 00:10:31.248 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:10:31.248 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:10:31.248 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:10:31.248 | 99.99th=[17695] 00:10:31.248 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:31.248 slat (usec): min=9, max=4594, avg=79.84, stdev=445.28 00:10:31.248 clat (usec): min=5511, max=13252, avg=10064.07, stdev=817.83 00:10:31.248 lat (usec): min=7371, max=13367, avg=10143.91, stdev=709.38 00:10:31.248 clat percentiles (usec): 00:10:31.248 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9503], 00:10:31.248 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:10:31.248 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:10:31.248 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13173], 00:10:31.248 | 99.99th=[13304] 00:10:31.248 bw ( KiB/s): min=24576, max=24625, per=47.09%, avg=24600.50, stdev=34.65, samples=2 00:10:31.248 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:10:31.248 lat (msec) : 2=0.01%, 10=25.18%, 20=74.81% 00:10:31.248 cpu : usr=5.69%, sys=15.47%, ctx=259, majf=0, minf=14 00:10:31.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:31.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.248 issued rwts: total=5959,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.248 job3: (groupid=0, jobs=1): err= 0: pid=68875: Mon Jul 15 08:23:23 2024 00:10:31.248 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec) 00:10:31.248 slat (usec): min=5, max=22328, avg=391.05, 
stdev=1767.93 00:10:31.248 clat (usec): min=23833, max=86740, avg=48418.50, stdev=10982.19 00:10:31.248 lat (usec): min=28110, max=86755, avg=48809.55, stdev=11077.49 00:10:31.248 clat percentiles (usec): 00:10:31.248 | 1.00th=[29492], 5.00th=[32637], 10.00th=[36439], 20.00th=[39584], 00:10:31.248 | 30.00th=[40109], 40.00th=[41157], 50.00th=[45351], 60.00th=[49021], 00:10:31.248 | 70.00th=[57410], 80.00th=[59507], 90.00th=[64750], 95.00th=[65799], 00:10:31.248 | 99.00th=[69731], 99.50th=[70779], 99.90th=[81265], 99.95th=[86508], 00:10:31.248 | 99.99th=[86508] 00:10:31.248 write: IOPS=1264, BW=5057KiB/s (5179kB/s)(5128KiB/1014msec); 0 zone resets 00:10:31.248 slat (usec): min=4, max=34571, avg=461.41, stdev=2507.01 00:10:31.248 clat (msec): min=11, max=129, avg=60.45, stdev=29.81 00:10:31.248 lat (msec): min=13, max=131, avg=60.91, stdev=30.03 00:10:31.248 clat percentiles (msec): 00:10:31.248 | 1.00th=[ 14], 5.00th=[ 25], 10.00th=[ 27], 20.00th=[ 30], 00:10:31.248 | 30.00th=[ 33], 40.00th=[ 39], 50.00th=[ 53], 60.00th=[ 78], 00:10:31.248 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 97], 95.00th=[ 97], 00:10:31.248 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 123], 99.95th=[ 130], 00:10:31.248 | 99.99th=[ 130] 00:10:31.248 bw ( KiB/s): min= 3584, max= 5648, per=8.84%, avg=4616.00, stdev=1459.47, samples=2 00:10:31.248 iops : min= 896, max= 1412, avg=1154.00, stdev=364.87, samples=2 00:10:31.248 lat (msec) : 20=2.21%, 50=52.30%, 100=44.36%, 250=1.13% 00:10:31.248 cpu : usr=1.28%, sys=2.96%, ctx=293, majf=0, minf=9 00:10:31.248 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:10:31.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.248 issued rwts: total=1024,1282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.248 00:10:31.248 Run status group 0 (all jobs): 00:10:31.248 READ: bw=46.8MiB/s (49.1MB/s), 4031KiB/s-23.2MiB/s (4128kB/s-24.3MB/s), io=47.6MiB (49.9MB), run=1003-1016msec 00:10:31.248 WRITE: bw=51.0MiB/s (53.5MB/s), 4858KiB/s-23.9MiB/s (4975kB/s-25.1MB/s), io=51.8MiB (54.3MB), run=1003-1016msec 00:10:31.248 00:10:31.248 Disk stats (read/write): 00:10:31.248 nvme0n1: ios=967/1024, merge=0/0, ticks=21730/30497, in_queue=52227, util=87.86% 00:10:31.248 nvme0n2: ios=3633/3746, merge=0/0, ticks=30653/19293, in_queue=49946, util=89.18% 00:10:31.248 nvme0n3: ios=5112/5192, merge=0/0, ticks=52586/47885, in_queue=100471, util=89.18% 00:10:31.248 nvme0n4: ios=936/1024, merge=0/0, ticks=21829/30500, in_queue=52329, util=89.21% 00:10:31.248 08:23:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:31.248 08:23:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68892 00:10:31.248 08:23:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:31.248 08:23:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:31.248 [global] 00:10:31.248 thread=1 00:10:31.248 invalidate=1 00:10:31.248 rw=read 00:10:31.248 time_based=1 00:10:31.248 runtime=10 00:10:31.248 ioengine=libaio 00:10:31.248 direct=1 00:10:31.248 bs=4096 00:10:31.248 iodepth=1 00:10:31.248 norandommap=1 00:10:31.248 numjobs=1 00:10:31.248 00:10:31.248 [job0] 00:10:31.248 filename=/dev/nvme0n1 00:10:31.248 [job1] 00:10:31.248 filename=/dev/nvme0n2 00:10:31.248 [job2] 00:10:31.248 filename=/dev/nvme0n3 00:10:31.248 
[job3] 00:10:31.248 filename=/dev/nvme0n4 00:10:31.248 Could not set queue depth (nvme0n1) 00:10:31.248 Could not set queue depth (nvme0n2) 00:10:31.248 Could not set queue depth (nvme0n3) 00:10:31.248 Could not set queue depth (nvme0n4) 00:10:31.248 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.248 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.248 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.248 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.248 fio-3.35 00:10:31.248 Starting 4 threads 00:10:34.531 08:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.531 fio: pid=68936, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:34.531 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=50327552, buflen=4096 00:10:34.531 08:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.531 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=68231168, buflen=4096 00:10:34.531 fio: pid=68935, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:34.531 08:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.531 08:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.790 fio: pid=68933, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:34.790 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=63102976, buflen=4096 00:10:34.790 08:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.790 08:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:35.048 fio: pid=68934, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:35.048 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16285696, buflen=4096 00:10:35.048 00:10:35.048 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68933: Mon Jul 15 08:23:27 2024 00:10:35.048 read: IOPS=4481, BW=17.5MiB/s (18.4MB/s)(60.2MiB/3438msec) 00:10:35.048 slat (usec): min=8, max=14762, avg=15.04, stdev=165.80 00:10:35.048 clat (usec): min=108, max=2985, avg=206.76, stdev=61.63 00:10:35.048 lat (usec): min=143, max=14922, avg=221.81, stdev=177.15 00:10:35.048 clat percentiles (usec): 00:10:35.048 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:35.048 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 231], 60.00th=[ 239], 00:10:35.048 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:10:35.048 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 363], 99.95th=[ 938], 00:10:35.048 | 99.99th=[ 2212] 00:10:35.048 bw ( KiB/s): min=15200, max=22698, per=24.79%, avg=17360.33, stdev=3236.43, samples=6 00:10:35.048 iops : min= 3800, max= 5674, avg=4340.00, stdev=808.94, samples=6 00:10:35.048 lat (usec) : 250=77.98%, 500=21.95%, 1000=0.01% 00:10:35.048 lat (msec) : 2=0.03%, 4=0.01% 00:10:35.048 cpu : usr=1.16%, sys=5.18%, 
ctx=15412, majf=0, minf=1 00:10:35.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 issued rwts: total=15407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.049 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68934: Mon Jul 15 08:23:27 2024 00:10:35.049 read: IOPS=5507, BW=21.5MiB/s (22.6MB/s)(79.5MiB/3697msec) 00:10:35.049 slat (usec): min=11, max=10713, avg=16.62, stdev=144.18 00:10:35.049 clat (usec): min=32, max=7709, avg=163.58, stdev=80.35 00:10:35.049 lat (usec): min=145, max=10984, avg=180.20, stdev=165.90 00:10:35.049 clat percentiles (usec): 00:10:35.049 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:10:35.049 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:10:35.049 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:10:35.049 | 99.00th=[ 196], 99.50th=[ 281], 99.90th=[ 807], 99.95th=[ 1123], 00:10:35.049 | 99.99th=[ 4047] 00:10:35.049 bw ( KiB/s): min=20912, max=22800, per=31.50%, avg=22055.14, stdev=732.97, samples=7 00:10:35.049 iops : min= 5228, max= 5700, avg=5513.57, stdev=183.38, samples=7 00:10:35.049 lat (usec) : 50=0.01%, 250=99.44%, 500=0.28%, 750=0.14%, 1000=0.07% 00:10:35.049 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:10:35.049 cpu : usr=1.46%, sys=6.76%, ctx=20370, majf=0, minf=1 00:10:35.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 issued rwts: total=20361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.049 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68935: Mon Jul 15 08:23:27 2024 00:10:35.049 read: IOPS=5230, BW=20.4MiB/s (21.4MB/s)(65.1MiB/3185msec) 00:10:35.049 slat (usec): min=11, max=11136, avg=14.73, stdev=109.82 00:10:35.049 clat (usec): min=139, max=3773, avg=175.03, stdev=35.79 00:10:35.049 lat (usec): min=154, max=11329, avg=189.76, stdev=115.69 00:10:35.049 clat percentiles (usec): 00:10:35.049 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:35.049 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:10:35.049 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 196], 00:10:35.049 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 273], 99.95th=[ 478], 00:10:35.049 | 99.99th=[ 1811] 00:10:35.049 bw ( KiB/s): min=20384, max=21280, per=29.97%, avg=20982.33, stdev=383.20, samples=6 00:10:35.049 iops : min= 5096, max= 5320, avg=5245.50, stdev=95.90, samples=6 00:10:35.049 lat (usec) : 250=99.86%, 500=0.08%, 750=0.02%, 1000=0.01% 00:10:35.049 lat (msec) : 2=0.01%, 4=0.01% 00:10:35.049 cpu : usr=1.51%, sys=6.44%, ctx=16665, majf=0, minf=1 00:10:35.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 issued rwts: total=16659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.049 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:10:35.049 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68936: Mon Jul 15 08:23:27 2024 00:10:35.049 read: IOPS=4162, BW=16.3MiB/s (17.0MB/s)(48.0MiB/2952msec) 00:10:35.049 slat (nsec): min=8713, max=72290, avg=13969.30, stdev=3243.32 00:10:35.049 clat (usec): min=144, max=7440, avg=224.73, stdev=111.07 00:10:35.049 lat (usec): min=158, max=7453, avg=238.70, stdev=110.88 00:10:35.049 clat percentiles (usec): 00:10:35.049 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:10:35.049 | 30.00th=[ 192], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:10:35.049 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:10:35.049 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 396], 99.95th=[ 2147], 00:10:35.049 | 99.99th=[ 6259] 00:10:35.049 bw ( KiB/s): min=15360, max=20774, per=24.18%, avg=16929.20, stdev=2375.50, samples=5 00:10:35.049 iops : min= 3840, max= 5193, avg=4232.20, stdev=593.67, samples=5 00:10:35.049 lat (usec) : 250=77.21%, 500=22.71%, 750=0.02% 00:10:35.049 lat (msec) : 2=0.01%, 4=0.04%, 10=0.02% 00:10:35.049 cpu : usr=1.19%, sys=5.49%, ctx=12288, majf=0, minf=1 00:10:35.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.049 issued rwts: total=12288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.049 00:10:35.049 Run status group 0 (all jobs): 00:10:35.049 READ: bw=68.4MiB/s (71.7MB/s), 16.3MiB/s-21.5MiB/s (17.0MB/s-22.6MB/s), io=253MiB (265MB), run=2952-3697msec 00:10:35.049 00:10:35.049 Disk stats (read/write): 00:10:35.049 nvme0n1: ios=15010/0, merge=0/0, ticks=3048/0, in_queue=3048, util=95.31% 00:10:35.049 nvme0n2: ios=19906/0, merge=0/0, ticks=3318/0, in_queue=3318, util=95.48% 00:10:35.049 nvme0n3: ios=16315/0, merge=0/0, ticks=2892/0, in_queue=2892, util=96.15% 00:10:35.049 nvme0n4: ios=11984/0, merge=0/0, ticks=2665/0, in_queue=2665, util=96.29% 00:10:35.049 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.049 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:35.306 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.306 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.564 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.564 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:35.822 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.822 08:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:36.079 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.079 08:23:28 nvmf_tcp.nvmf_fio_target -- 
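The fio output above comes from four sequential-read jobs (rw=read, bs=4096, ioengine=libaio, iodepth=1, one job per namespace of the exported subsystem) that keep running while fio.sh deletes the backing RAID/concat/malloc bdevs out from under them, which is why every job ends in a Remote I/O error. For reference, a stand-alone approximation of that workload is sketched below; the job file actually generated by fio.sh is not shown in this excerpt, and the options marked as assumed are placeholders rather than values taken from the log:

    cat > hotplug-read.fio <<'EOF'
    [global]
    rw=read
    bs=4096
    ioengine=libaio
    iodepth=1
    thread=1
    # The three options below are assumed; they are not visible in the log above.
    direct=1
    time_based=1
    runtime=10

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio hotplug-read.fio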
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68892 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.338 nvmf hotplug test: fio failed as expected 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:36.338 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.596 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.853 rmmod nvme_tcp 00:10:36.853 rmmod nvme_fabrics 00:10:36.853 rmmod nvme_keyring 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68506 ']' 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68506 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68506 ']' 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68506 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@953 -- # uname 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68506 00:10:36.853 killing process with pid 68506 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.853 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68506' 00:10:36.854 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68506 00:10:36.854 08:23:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68506 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:37.111 00:10:37.111 real 0m19.356s 00:10:37.111 user 1m12.575s 00:10:37.111 sys 0m10.436s 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.111 08:23:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.111 ************************************ 00:10:37.111 END TEST nvmf_fio_target 00:10:37.111 ************************************ 00:10:37.111 08:23:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:37.111 08:23:29 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.111 08:23:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:37.111 08:23:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.111 08:23:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.111 ************************************ 00:10:37.111 START TEST nvmf_bdevio 00:10:37.111 ************************************ 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.111 * Looking for test storage... 
00:10:37.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.111 08:23:29 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:37.111 08:23:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.112 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:37.369 Cannot find device "nvmf_tgt_br" 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.369 Cannot find device "nvmf_tgt_br2" 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:37.369 Cannot find device "nvmf_tgt_br" 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:37.369 Cannot find device "nvmf_tgt_br2" 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:37.369 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:10:37.627 00:10:37.627 --- 10.0.0.2 ping statistics --- 00:10:37.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.627 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:37.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:10:37.627 00:10:37.627 --- 10.0.0.3 ping statistics --- 00:10:37.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.627 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
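The three ping checks here confirm the veth topology that nvmf_veth_init just assembled: nvmf_init_if (10.0.0.1) stays in the host namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into nvmf_tgt_ns_spdk, and their host-side peers (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) are enslaved to the nvmf_br bridge. The same wiring can be inspected by hand with standard iproute2 commands; this is an illustrative aside, not something the test script runs:

    # Host-namespace side: initiator veth and the bridge
    ip -br addr show dev nvmf_init_if
    ip -br link show dev nvmf_br
    # Target side, inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk ip -br addr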
00:10:37.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:37.627 00:10:37.627 --- 10.0.0.1 ping statistics --- 00:10:37.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.627 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69194 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69194 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69194 ']' 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.627 08:23:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.627 [2024-07-15 08:23:29.681305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:37.627 [2024-07-15 08:23:29.681390] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.886 [2024-07-15 08:23:29.817078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.886 [2024-07-15 08:23:29.960988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.886 [2024-07-15 08:23:29.961043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:37.886 [2024-07-15 08:23:29.961055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.886 [2024-07-15 08:23:29.961063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.886 [2024-07-15 08:23:29.961071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.886 [2024-07-15 08:23:29.961239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:37.886 [2024-07-15 08:23:29.961843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:37.886 [2024-07-15 08:23:29.961954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:37.886 [2024-07-15 08:23:29.961963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.886 [2024-07-15 08:23:30.014249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 [2024-07-15 08:23:30.706500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 Malloc0 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.821 [2024-07-15 08:23:30.765880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:38.821 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:38.821 { 00:10:38.821 "params": { 00:10:38.821 "name": "Nvme$subsystem", 00:10:38.821 "trtype": "$TEST_TRANSPORT", 00:10:38.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.821 "adrfam": "ipv4", 00:10:38.821 "trsvcid": "$NVMF_PORT", 00:10:38.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.822 "hdgst": ${hdgst:-false}, 00:10:38.822 "ddgst": ${ddgst:-false} 00:10:38.822 }, 00:10:38.822 "method": "bdev_nvme_attach_controller" 00:10:38.822 } 00:10:38.822 EOF 00:10:38.822 )") 00:10:38.822 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:38.822 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:38.822 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:38.822 08:23:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:38.822 "params": { 00:10:38.822 "name": "Nvme1", 00:10:38.822 "trtype": "tcp", 00:10:38.822 "traddr": "10.0.0.2", 00:10:38.822 "adrfam": "ipv4", 00:10:38.822 "trsvcid": "4420", 00:10:38.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.822 "hdgst": false, 00:10:38.822 "ddgst": false 00:10:38.822 }, 00:10:38.822 "method": "bdev_nvme_attach_controller" 00:10:38.822 }' 00:10:38.822 [2024-07-15 08:23:30.827860] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:38.822 [2024-07-15 08:23:30.827969] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69236 ] 00:10:38.822 [2024-07-15 08:23:30.969899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.080 [2024-07-15 08:23:31.100768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.080 [2024-07-15 08:23:31.100922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.080 [2024-07-15 08:23:31.101628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.080 [2024-07-15 08:23:31.165581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:39.339 I/O targets: 00:10:39.339 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:39.339 00:10:39.339 00:10:39.339 CUnit - A unit testing framework for C - Version 2.1-3 00:10:39.339 http://cunit.sourceforge.net/ 00:10:39.339 00:10:39.339 00:10:39.339 Suite: bdevio tests on: Nvme1n1 00:10:39.339 Test: blockdev write read block ...passed 00:10:39.339 Test: blockdev write zeroes read block ...passed 00:10:39.339 Test: blockdev write zeroes read no split ...passed 00:10:39.339 Test: blockdev write zeroes read split ...passed 00:10:39.339 Test: blockdev write zeroes read split partial ...passed 00:10:39.339 Test: blockdev reset ...[2024-07-15 08:23:31.321100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:39.339 [2024-07-15 08:23:31.321243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc77c0 (9): Bad file descriptor 00:10:39.339 [2024-07-15 08:23:31.336443] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
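bdevio attaches to the target through SPDK's own bdev_nvme initiator, using the bdev_nvme_attach_controller parameters printed a few lines above (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). For comparison, the kernel initiator used elsewhere in this log would reach the same listener with something like the following illustrative sketch, with the hostnqn taken from the NVME_HOSTNQN value generated earlier in the log:

    # Kernel-initiator equivalent of the attach above (illustrative only)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6
    # ...and detach again once done:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1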
00:10:39.339 passed 00:10:39.339 Test: blockdev write read 8 blocks ...passed 00:10:39.339 Test: blockdev write read size > 128k ...passed 00:10:39.339 Test: blockdev write read invalid size ...passed 00:10:39.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.339 Test: blockdev write read max offset ...passed 00:10:39.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.339 Test: blockdev writev readv 8 blocks ...passed 00:10:39.339 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.339 Test: blockdev writev readv block ...passed 00:10:39.339 Test: blockdev writev readv size > 128k ...passed 00:10:39.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.339 Test: blockdev comparev and writev ...[2024-07-15 08:23:31.345803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.346066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.346175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.346267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.346874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.346977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.347067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.347465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.347582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.347680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.347815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.348223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.348337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.348425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.339 [2024-07-15 08:23:31.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:39.339 passed 00:10:39.339 Test: blockdev nvme passthru rw ...passed 00:10:39.339 Test: blockdev nvme passthru vendor specific ...[2024-07-15 08:23:31.349412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.339 [2024-07-15 08:23:31.349507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.349688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.339 [2024-07-15 08:23:31.349800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.349988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.339 [2024-07-15 08:23:31.350079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:39.339 [2024-07-15 08:23:31.350251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.339 [2024-07-15 08:23:31.350341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:39.339 passed 00:10:39.339 Test: blockdev nvme admin passthru ...passed 00:10:39.339 Test: blockdev copy ...passed 00:10:39.339 00:10:39.339 Run Summary: Type Total Ran Passed Failed Inactive 00:10:39.339 suites 1 1 n/a 0 0 00:10:39.339 tests 23 23 23 0 0 00:10:39.339 asserts 152 152 152 0 n/a 00:10:39.339 00:10:39.339 Elapsed time = 0.145 seconds 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.598 rmmod nvme_tcp 00:10:39.598 rmmod nvme_fabrics 00:10:39.598 rmmod nvme_keyring 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69194 ']' 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69194 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69194 ']' 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69194 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69194 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:39.598 killing process with pid 69194 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69194' 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69194 00:10:39.598 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69194 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:39.857 00:10:39.857 real 0m2.796s 00:10:39.857 user 0m9.256s 00:10:39.857 sys 0m0.755s 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.857 08:23:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.857 ************************************ 00:10:39.857 END TEST nvmf_bdevio 00:10:39.857 ************************************ 00:10:39.857 08:23:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:39.857 08:23:32 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:39.857 08:23:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.857 08:23:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.857 08:23:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.857 ************************************ 00:10:39.857 START TEST nvmf_auth_target 00:10:39.857 ************************************ 00:10:39.857 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:40.116 * Looking for test storage... 
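The nvmftestfini/nvmf_tcp_fini sequence just above undoes the per-test setup: the target process (pid 69194) is killed, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and the address on nvmf_init_if is flushed before remove_spdk_ns runs. Done by hand, that teardown corresponds roughly to the sketch below; the final netns deletion is an assumption about what remove_spdk_ns does, since its body is not shown in this excerpt:

    # Approximate manual equivalent of the teardown above (illustrative only)
    kill "$nvmfpid"                                  # killprocess "$nvmfpid" in the test
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring   # matches the rmmod lines in the log
    ip -4 addr flush nvmf_init_if                    # as run by nvmf_tcp_fini
    ip netns delete nvmf_tgt_ns_spdk                 # assumed behaviour of remove_spdk_ns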
00:10:40.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.116 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.117 Cannot find device "nvmf_tgt_br" 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.117 Cannot find device "nvmf_tgt_br2" 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.117 Cannot find device "nvmf_tgt_br" 00:10:40.117 
08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.117 Cannot find device "nvmf_tgt_br2" 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.117 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.376 08:23:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:40.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:40.376 00:10:40.376 --- 10.0.0.2 ping statistics --- 00:10:40.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.376 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:40.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:40.376 00:10:40.376 --- 10.0.0.3 ping statistics --- 00:10:40.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.376 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:40.376 00:10:40.376 --- 10.0.0.1 ping statistics --- 00:10:40.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.376 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69410 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69410 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.376 08:23:32 
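Condensed for reference, the nvmf_veth_init sequence above builds the following topology; this is a minimal sketch rather than the script itself, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity:

  # initiator veth pair stays in the default namespace; the target pair's far end moves into a netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator side, 10.0.0.2 = target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring the links up and join the two bridge-side peers with a Linux bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # open TCP/4420 toward the initiator interface, allow bridge forwarding, then sanity-check reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1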
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.376 08:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69442 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b3a2a3a10e6253698206a307516565f4608a2a42c3e13be6 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UPM 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b3a2a3a10e6253698206a307516565f4608a2a42c3e13be6 0 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b3a2a3a10e6253698206a307516565f4608a2a42c3e13be6 0 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b3a2a3a10e6253698206a307516565f4608a2a42c3e13be6 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UPM 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UPM 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- 
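Two SPDK processes drive the rest of the test: nvmf_tgt runs inside the target namespace with -L nvmf_auth tracing as the authenticating target, and a second spdk_tgt in the default namespace acts as the host/initiator behind its own RPC socket. A minimal sketch of that arrangement, with binaries and flags taken from the log; the retry loops below merely stand in for the script's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk

  # target application inside the namespace (RPC on the default /var/tmp/spdk.sock)
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!

  # host-side application with its own RPC socket and nvme_auth tracing
  "$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
  hostpid=$!

  # wait until both RPC sockets respond before configuring anything
  until "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null; do sleep 0.5; done
  until "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock rpc_get_methods &> /dev/null; do sleep 0.5; done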
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.UPM 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1db3540a19dadae1d09b6115c137581dd96b3bc6a9eb8482d4099cb3c1de6fb3 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uM5 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1db3540a19dadae1d09b6115c137581dd96b3bc6a9eb8482d4099cb3c1de6fb3 3 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1db3540a19dadae1d09b6115c137581dd96b3bc6a9eb8482d4099cb3c1de6fb3 3 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1db3540a19dadae1d09b6115c137581dd96b3bc6a9eb8482d4099cb3c1de6fb3 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uM5 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uM5 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.uM5 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.749 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=829dfda7c3184c07cb5cbbc0f4ab28ed 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rlC 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 829dfda7c3184c07cb5cbbc0f4ab28ed 1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 829dfda7c3184c07cb5cbbc0f4ab28ed 1 
00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=829dfda7c3184c07cb5cbbc0f4ab28ed 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rlC 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rlC 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.rlC 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f6619608b06ce6fb4a883802be1511a6837c32eb5eec477 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YB3 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f6619608b06ce6fb4a883802be1511a6837c32eb5eec477 2 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f6619608b06ce6fb4a883802be1511a6837c32eb5eec477 2 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4f6619608b06ce6fb4a883802be1511a6837c32eb5eec477 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YB3 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YB3 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.YB3 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:41.750 
08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3d3bea9ec7e64b3b9371a7b363b731f1ed6d02b10dbcf0fe 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rhK 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3d3bea9ec7e64b3b9371a7b363b731f1ed6d02b10dbcf0fe 2 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3d3bea9ec7e64b3b9371a7b363b731f1ed6d02b10dbcf0fe 2 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3d3bea9ec7e64b3b9371a7b363b731f1ed6d02b10dbcf0fe 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rhK 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rhK 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.rhK 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d9bebebff8d905f3a98d6fbcd8eb070d 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zvK 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d9bebebff8d905f3a98d6fbcd8eb070d 1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d9bebebff8d905f3a98d6fbcd8eb070d 1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d9bebebff8d905f3a98d6fbcd8eb070d 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:41.750 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zvK 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zvK 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.zvK 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cdaa5e1f7886529287e56b2768e1be2896e99dcc4b1ee50c059c8f3d8ba6bae4 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ghU 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cdaa5e1f7886529287e56b2768e1be2896e99dcc4b1ee50c059c8f3d8ba6bae4 3 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cdaa5e1f7886529287e56b2768e1be2896e99dcc4b1ee50c059c8f3d8ba6bae4 3 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cdaa5e1f7886529287e56b2768e1be2896e99dcc4b1ee50c059c8f3d8ba6bae4 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:42.008 08:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ghU 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ghU 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ghU 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69410 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
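Each gen_dhchap_key call above draws random bytes with xxd and wraps them into a DHHC-1 secret string via an inline Python helper. The sketch below reproduces that wrapping under the assumption that the secret follows the standard DH-HMAC-CHAP representation, base64 of the secret bytes plus a little-endian CRC32, with the second field selecting the hash (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); it is illustrative, not the script's own helper:

  key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters used as the ASCII secret
  hash_id=00                                 # matches the digest argument passed to format_dhchap_key
  secret=$(python3 -c "import base64, sys, zlib; k = sys.argv[1].encode(); print('DHHC-1:' + sys.argv[2] + ':' + base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode() + ':')" "$key_hex" "$hash_id")

  keyfile=$(mktemp -t spdk.key-null.XXX)
  echo "$secret" > "$keyfile"
  chmod 0600 "$keyfile"                      # the test insists on 0600 before handing the file to the keyring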
00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.008 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69442 /var/tmp/host.sock 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69442 ']' 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.267 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UPM 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UPM 00:10:42.524 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UPM 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.uM5 ]] 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uM5 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uM5 00:10:42.798 08:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.uM5 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rlC 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rlC 00:10:43.056 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rlC 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.YB3 ]] 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YB3 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YB3 00:10:43.315 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YB3 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rhK 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rhK 00:10:43.574 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rhK 00:10:43.832 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.zvK ]] 00:10:43.832 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvK 00:10:43.832 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.832 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.832 08:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.832 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvK 00:10:43.833 08:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zvK 00:10:44.091 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:44.091 
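The registration pattern running through this stretch of the log is the same for every index (key3 follows right below): each secret file is added to the target's keyring over the default RPC socket and to the host-side application over /var/tmp/host.sock, with the controller keys (ckeyN) registered only when one was generated. A condensed sketch, assuming the keys/ckeys arrays populated earlier:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[i]}"                          # target side
      "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"    # host side
      if [[ -n ${ckeys[i]} ]]; then
          "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
          "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
      fi
  done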
08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ghU 00:10:44.091 08:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.091 08:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.091 08:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.091 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ghU 00:10:44.091 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ghU 00:10:44.349 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:44.349 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:44.349 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.349 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.349 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:44.349 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.607 08:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.866 00:10:44.866 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.866 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:44.866 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.125 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.125 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.125 08:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.125 08:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.383 08:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.383 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.383 { 00:10:45.383 "cntlid": 1, 00:10:45.383 "qid": 0, 00:10:45.383 "state": "enabled", 00:10:45.383 "thread": "nvmf_tgt_poll_group_000", 00:10:45.383 "listen_address": { 00:10:45.384 "trtype": "TCP", 00:10:45.384 "adrfam": "IPv4", 00:10:45.384 "traddr": "10.0.0.2", 00:10:45.384 "trsvcid": "4420" 00:10:45.384 }, 00:10:45.384 "peer_address": { 00:10:45.384 "trtype": "TCP", 00:10:45.384 "adrfam": "IPv4", 00:10:45.384 "traddr": "10.0.0.1", 00:10:45.384 "trsvcid": "54404" 00:10:45.384 }, 00:10:45.384 "auth": { 00:10:45.384 "state": "completed", 00:10:45.384 "digest": "sha256", 00:10:45.384 "dhgroup": "null" 00:10:45.384 } 00:10:45.384 } 00:10:45.384 ]' 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.384 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.642 08:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.912 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.912 08:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.180 { 00:10:51.180 "cntlid": 3, 00:10:51.180 "qid": 0, 00:10:51.180 "state": "enabled", 00:10:51.180 "thread": "nvmf_tgt_poll_group_000", 00:10:51.180 "listen_address": { 00:10:51.180 "trtype": "TCP", 00:10:51.180 "adrfam": "IPv4", 00:10:51.180 "traddr": "10.0.0.2", 00:10:51.180 "trsvcid": "4420" 00:10:51.180 }, 00:10:51.180 "peer_address": { 00:10:51.180 "trtype": "TCP", 00:10:51.180 
"adrfam": "IPv4", 00:10:51.180 "traddr": "10.0.0.1", 00:10:51.180 "trsvcid": "54424" 00:10:51.180 }, 00:10:51.180 "auth": { 00:10:51.180 "state": "completed", 00:10:51.180 "digest": "sha256", 00:10:51.180 "dhgroup": "null" 00:10:51.180 } 00:10:51.180 } 00:10:51.180 ]' 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:51.180 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.443 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.443 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.443 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.443 08:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.375 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.632 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.890 00:10:52.890 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.890 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.890 08:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.148 { 00:10:53.148 "cntlid": 5, 00:10:53.148 "qid": 0, 00:10:53.148 "state": "enabled", 00:10:53.148 "thread": "nvmf_tgt_poll_group_000", 00:10:53.148 "listen_address": { 00:10:53.148 "trtype": "TCP", 00:10:53.148 "adrfam": "IPv4", 00:10:53.148 "traddr": "10.0.0.2", 00:10:53.148 "trsvcid": "4420" 00:10:53.148 }, 00:10:53.148 "peer_address": { 00:10:53.148 "trtype": "TCP", 00:10:53.148 "adrfam": "IPv4", 00:10:53.148 "traddr": "10.0.0.1", 00:10:53.148 "trsvcid": "54442" 00:10:53.148 }, 00:10:53.148 "auth": { 00:10:53.148 "state": "completed", 00:10:53.148 "digest": "sha256", 00:10:53.148 "dhgroup": "null" 00:10:53.148 } 00:10:53.148 } 00:10:53.148 ]' 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.148 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.405 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:53.405 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.405 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.405 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.405 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
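Each connect_authenticate round in this part of the log has the same shape: pin the host-side DH-HMAC-CHAP digest and DH group, register the host NQN on the subsystem with the matching key pair, attach a controller from the host application (which performs the authentication during the fabric connect), then read the qpair back to confirm the negotiated parameters. A minimal sketch of one round, reusing the NQNs, addresses and rpc.py invocation shown above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

  # host side: only allow the digest/dhgroup combination under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

  # target side: bind key0/ckey0 to this host on the subsystem
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: attaching the controller triggers DH-HMAC-CHAP during connect
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify the qpair reports the expected digest, dhgroup and a completed auth state
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # tear down before the next combination
  hostrpc bdev_nvme_detach_controller nvme0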
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.663 08:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:54.229 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.487 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.744 00:10:54.744 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.744 08:23:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.744 08:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.002 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.002 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.002 08:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.002 08:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.002 08:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.002 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.002 { 00:10:55.002 "cntlid": 7, 00:10:55.002 "qid": 0, 00:10:55.002 "state": "enabled", 00:10:55.002 "thread": "nvmf_tgt_poll_group_000", 00:10:55.002 "listen_address": { 00:10:55.002 "trtype": "TCP", 00:10:55.002 "adrfam": "IPv4", 00:10:55.002 "traddr": "10.0.0.2", 00:10:55.002 "trsvcid": "4420" 00:10:55.002 }, 00:10:55.002 "peer_address": { 00:10:55.002 "trtype": "TCP", 00:10:55.002 "adrfam": "IPv4", 00:10:55.002 "traddr": "10.0.0.1", 00:10:55.002 "trsvcid": "59662" 00:10:55.002 }, 00:10:55.002 "auth": { 00:10:55.002 "state": "completed", 00:10:55.003 "digest": "sha256", 00:10:55.003 "dhgroup": "null" 00:10:55.003 } 00:10:55.003 } 00:10:55.003 ]' 00:10:55.003 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.260 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.517 08:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:10:56.083 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
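In between those RPC-driven rounds the test also exercises the kernel initiator: nvme-cli connects to the same subsystem with the literal DHHC-1 secret strings, and the session is torn down again before the next digest/dhgroup combination. A minimal sketch of that round trip, assuming an nvme-cli build with DH-HMAC-CHAP support; the two secrets are placeholders, not the values logged above:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

  # bidirectional authentication: --dhchap-secret is the host key, --dhchap-ctrl-secret the controller key
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:<host-secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller-secret>:'

  # tear the session down again before the next digest/dhgroup combination
  nvme disconnect -n "$subnqn"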
for dhgroup in "${dhgroups[@]}" 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:56.340 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.598 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.856 00:10:56.856 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.856 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.856 08:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.114 { 00:10:57.114 "cntlid": 9, 00:10:57.114 "qid": 0, 00:10:57.114 "state": "enabled", 00:10:57.114 "thread": "nvmf_tgt_poll_group_000", 00:10:57.114 "listen_address": { 00:10:57.114 "trtype": "TCP", 00:10:57.114 "adrfam": "IPv4", 00:10:57.114 
"traddr": "10.0.0.2", 00:10:57.114 "trsvcid": "4420" 00:10:57.114 }, 00:10:57.114 "peer_address": { 00:10:57.114 "trtype": "TCP", 00:10:57.114 "adrfam": "IPv4", 00:10:57.114 "traddr": "10.0.0.1", 00:10:57.114 "trsvcid": "59698" 00:10:57.114 }, 00:10:57.114 "auth": { 00:10:57.114 "state": "completed", 00:10:57.114 "digest": "sha256", 00:10:57.114 "dhgroup": "ffdhe2048" 00:10:57.114 } 00:10:57.114 } 00:10:57.114 ]' 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.114 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.372 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:57.372 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.372 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.372 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.372 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.630 08:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.197 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.456 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.715 00:10:58.715 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.715 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.715 08:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.973 { 00:10:58.973 "cntlid": 11, 00:10:58.973 "qid": 0, 00:10:58.973 "state": "enabled", 00:10:58.973 "thread": "nvmf_tgt_poll_group_000", 00:10:58.973 "listen_address": { 00:10:58.973 "trtype": "TCP", 00:10:58.973 "adrfam": "IPv4", 00:10:58.973 "traddr": "10.0.0.2", 00:10:58.973 "trsvcid": "4420" 00:10:58.973 }, 00:10:58.973 "peer_address": { 00:10:58.973 "trtype": "TCP", 00:10:58.973 "adrfam": "IPv4", 00:10:58.973 "traddr": "10.0.0.1", 00:10:58.973 "trsvcid": "59724" 00:10:58.973 }, 00:10:58.973 "auth": { 00:10:58.973 "state": "completed", 00:10:58.973 "digest": "sha256", 00:10:58.973 "dhgroup": "ffdhe2048" 00:10:58.973 } 00:10:58.973 } 00:10:58.973 ]' 00:10:58.973 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.230 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.230 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.231 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:59.231 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.231 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.231 08:23:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.231 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.489 08:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.424 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.425 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.993 00:11:00.993 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.993 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.993 08:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.258 { 00:11:01.258 "cntlid": 13, 00:11:01.258 "qid": 0, 00:11:01.258 "state": "enabled", 00:11:01.258 "thread": "nvmf_tgt_poll_group_000", 00:11:01.258 "listen_address": { 00:11:01.258 "trtype": "TCP", 00:11:01.258 "adrfam": "IPv4", 00:11:01.258 "traddr": "10.0.0.2", 00:11:01.258 "trsvcid": "4420" 00:11:01.258 }, 00:11:01.258 "peer_address": { 00:11:01.258 "trtype": "TCP", 00:11:01.258 "adrfam": "IPv4", 00:11:01.258 "traddr": "10.0.0.1", 00:11:01.258 "trsvcid": "59768" 00:11:01.258 }, 00:11:01.258 "auth": { 00:11:01.258 "state": "completed", 00:11:01.258 "digest": "sha256", 00:11:01.258 "dhgroup": "ffdhe2048" 00:11:01.258 } 00:11:01.258 } 00:11:01.258 ]' 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.258 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.519 08:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 
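The trace above is one complete connect_authenticate round (sha256 / ffdhe2048 / key2): authorize the host on the subsystem, pin the initiator's DH-HMAC-CHAP options, attach, verify, detach, remove the host. A minimal shell sketch of that round, using only the rpc.py path, addresses, NQNs and flags that appear in the log; the shell variable names are illustrative, the key2/ckey2 keyring entries are assumed to have been registered earlier in the test (outside this excerpt), and the target-side calls are shown against rpc.py's default socket because the socket behind the test's rpc_cmd wrapper is not visible here.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

# Target side: authorize the host on the subsystem with a DH-HMAC-CHAP key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: restrict the initiator to one digest/dhgroup, then attach with the same keys.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Tear-down mirrors the log: detach the controller, then drop the host entry
# before the next key/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same round then repeats with the next key index, which is what the for-loops over dhgroups and keys in the surrounding trace are doing.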
00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.457 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.716 00:11:02.716 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.716 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.716 08:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.283 { 00:11:03.283 "cntlid": 15, 00:11:03.283 "qid": 0, 
00:11:03.283 "state": "enabled", 00:11:03.283 "thread": "nvmf_tgt_poll_group_000", 00:11:03.283 "listen_address": { 00:11:03.283 "trtype": "TCP", 00:11:03.283 "adrfam": "IPv4", 00:11:03.283 "traddr": "10.0.0.2", 00:11:03.283 "trsvcid": "4420" 00:11:03.283 }, 00:11:03.283 "peer_address": { 00:11:03.283 "trtype": "TCP", 00:11:03.283 "adrfam": "IPv4", 00:11:03.283 "traddr": "10.0.0.1", 00:11:03.283 "trsvcid": "59794" 00:11:03.283 }, 00:11:03.283 "auth": { 00:11:03.283 "state": "completed", 00:11:03.283 "digest": "sha256", 00:11:03.283 "dhgroup": "ffdhe2048" 00:11:03.283 } 00:11:03.283 } 00:11:03.283 ]' 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.283 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.542 08:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:04.109 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.367 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.934 00:11:04.934 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.934 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.935 08:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.194 { 00:11:05.194 "cntlid": 17, 00:11:05.194 "qid": 0, 00:11:05.194 "state": "enabled", 00:11:05.194 "thread": "nvmf_tgt_poll_group_000", 00:11:05.194 "listen_address": { 00:11:05.194 "trtype": "TCP", 00:11:05.194 "adrfam": "IPv4", 00:11:05.194 "traddr": "10.0.0.2", 00:11:05.194 "trsvcid": "4420" 00:11:05.194 }, 00:11:05.194 "peer_address": { 00:11:05.194 "trtype": "TCP", 00:11:05.194 "adrfam": "IPv4", 00:11:05.194 "traddr": "10.0.0.1", 00:11:05.194 "trsvcid": "33450" 00:11:05.194 }, 00:11:05.194 "auth": { 00:11:05.194 "state": "completed", 00:11:05.194 "digest": "sha256", 00:11:05.194 "dhgroup": "ffdhe3072" 00:11:05.194 } 00:11:05.194 } 00:11:05.194 ]' 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.194 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.452 08:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.387 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.387 
08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.646 00:11:06.905 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.905 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.905 08:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.905 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.905 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.905 08:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.905 08:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.164 { 00:11:07.164 "cntlid": 19, 00:11:07.164 "qid": 0, 00:11:07.164 "state": "enabled", 00:11:07.164 "thread": "nvmf_tgt_poll_group_000", 00:11:07.164 "listen_address": { 00:11:07.164 "trtype": "TCP", 00:11:07.164 "adrfam": "IPv4", 00:11:07.164 "traddr": "10.0.0.2", 00:11:07.164 "trsvcid": "4420" 00:11:07.164 }, 00:11:07.164 "peer_address": { 00:11:07.164 "trtype": "TCP", 00:11:07.164 "adrfam": "IPv4", 00:11:07.164 "traddr": "10.0.0.1", 00:11:07.164 "trsvcid": "33476" 00:11:07.164 }, 00:11:07.164 "auth": { 00:11:07.164 "state": "completed", 00:11:07.164 "digest": "sha256", 00:11:07.164 "dhgroup": "ffdhe3072" 00:11:07.164 } 00:11:07.164 } 00:11:07.164 ]' 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.164 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.423 08:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:07.990 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
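Between attach and detach, the test asserts that authentication actually completed rather than merely that the connect returned. A sketch of those checks (target/auth.sh@44-48 in the trace above), reusing the rpc.py path, host socket and jq filters shown in the log; the shell variables and the default-socket call for the target-side query are assumptions of this sketch.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller must show up under the expected name.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]

# Target side: the qpair (qid 0, the admin queue) must report the negotiated
# digest and dhgroup and an auth state of "completed".
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

This is the information the qpairs JSON dumps in the trace carry: listen/peer addresses plus the auth block whose digest, dhgroup and state are compared against the values the round was configured with.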
00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.991 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.250 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.508 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.767 08:24:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.024 08:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.024 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.024 { 00:11:09.024 "cntlid": 21, 00:11:09.024 "qid": 0, 00:11:09.024 "state": "enabled", 00:11:09.024 "thread": "nvmf_tgt_poll_group_000", 00:11:09.024 "listen_address": { 00:11:09.024 "trtype": "TCP", 00:11:09.024 "adrfam": "IPv4", 00:11:09.024 "traddr": "10.0.0.2", 00:11:09.024 "trsvcid": "4420" 00:11:09.024 }, 00:11:09.024 "peer_address": { 00:11:09.024 "trtype": "TCP", 00:11:09.024 "adrfam": "IPv4", 00:11:09.024 "traddr": "10.0.0.1", 00:11:09.024 "trsvcid": "33504" 00:11:09.024 }, 00:11:09.024 "auth": { 00:11:09.024 "state": "completed", 00:11:09.024 "digest": "sha256", 00:11:09.024 "dhgroup": "ffdhe3072" 00:11:09.024 } 00:11:09.024 } 00:11:09.024 ]' 00:11:09.024 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.024 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.024 08:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.024 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.024 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.024 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.024 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.024 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.282 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:09.848 08:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:10.107 08:24:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.107 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.674 00:11:10.674 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.674 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.674 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.935 { 00:11:10.935 "cntlid": 23, 00:11:10.935 "qid": 0, 00:11:10.935 "state": "enabled", 00:11:10.935 "thread": "nvmf_tgt_poll_group_000", 00:11:10.935 "listen_address": { 00:11:10.935 "trtype": "TCP", 00:11:10.935 "adrfam": "IPv4", 00:11:10.935 "traddr": "10.0.0.2", 00:11:10.935 "trsvcid": "4420" 00:11:10.935 }, 00:11:10.935 "peer_address": { 00:11:10.935 "trtype": "TCP", 00:11:10.935 "adrfam": "IPv4", 00:11:10.935 "traddr": "10.0.0.1", 00:11:10.935 "trsvcid": "33538" 00:11:10.935 }, 00:11:10.935 "auth": { 00:11:10.935 "state": "completed", 00:11:10.935 "digest": "sha256", 00:11:10.935 "dhgroup": "ffdhe3072" 00:11:10.935 } 00:11:10.935 } 00:11:10.935 ]' 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:10.935 08:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.935 08:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.935 08:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.935 08:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.194 08:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:12.127 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.128 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.485 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.744 00:11:12.744 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.744 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.744 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.003 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.003 08:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.004 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.004 08:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.004 { 00:11:13.004 "cntlid": 25, 00:11:13.004 "qid": 0, 00:11:13.004 "state": "enabled", 00:11:13.004 "thread": "nvmf_tgt_poll_group_000", 00:11:13.004 "listen_address": { 00:11:13.004 "trtype": "TCP", 00:11:13.004 "adrfam": "IPv4", 00:11:13.004 "traddr": "10.0.0.2", 00:11:13.004 "trsvcid": "4420" 00:11:13.004 }, 00:11:13.004 "peer_address": { 00:11:13.004 "trtype": "TCP", 00:11:13.004 "adrfam": "IPv4", 00:11:13.004 "traddr": "10.0.0.1", 00:11:13.004 "trsvcid": "33568" 00:11:13.004 }, 00:11:13.004 "auth": { 00:11:13.004 "state": "completed", 00:11:13.004 "digest": "sha256", 00:11:13.004 "dhgroup": "ffdhe4096" 00:11:13.004 } 00:11:13.004 } 00:11:13.004 ]' 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.004 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.572 08:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret 
DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.140 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.399 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.659 00:11:14.659 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.659 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.659 08:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
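The second half of each round goes through the kernel initiator: nvme-cli connects with the plain DHHC-1 secrets (rather than the keyring names used on the bdev_nvme path) and then disconnects. A sketch of that leg, with the secrets shortened to placeholders; the full values are the DHHC-1:00/DHHC-1:03 strings printed in the log above, and the variable names are illustrative only.

#!/usr/bin/env bash
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

# Connect over TCP with one I/O queue, authenticating with the host secret and
# expecting the controller to authenticate back with the ctrl secret.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:00:<host secret from the log>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret from the log>'

# Tear the association down again before the next key/dhgroup combination.
nvme disconnect -n "$subnqn"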
00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.918 { 00:11:14.918 "cntlid": 27, 00:11:14.918 "qid": 0, 00:11:14.918 "state": "enabled", 00:11:14.918 "thread": "nvmf_tgt_poll_group_000", 00:11:14.918 "listen_address": { 00:11:14.918 "trtype": "TCP", 00:11:14.918 "adrfam": "IPv4", 00:11:14.918 "traddr": "10.0.0.2", 00:11:14.918 "trsvcid": "4420" 00:11:14.918 }, 00:11:14.918 "peer_address": { 00:11:14.918 "trtype": "TCP", 00:11:14.918 "adrfam": "IPv4", 00:11:14.918 "traddr": "10.0.0.1", 00:11:14.918 "trsvcid": "37448" 00:11:14.918 }, 00:11:14.918 "auth": { 00:11:14.918 "state": "completed", 00:11:14.918 "digest": "sha256", 00:11:14.918 "dhgroup": "ffdhe4096" 00:11:14.918 } 00:11:14.918 } 00:11:14.918 ]' 00:11:14.918 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.177 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.435 08:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.372 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.963 00:11:16.963 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.963 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.963 08:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.220 { 00:11:17.220 "cntlid": 29, 00:11:17.220 "qid": 0, 00:11:17.220 "state": "enabled", 00:11:17.220 "thread": "nvmf_tgt_poll_group_000", 00:11:17.220 "listen_address": { 00:11:17.220 "trtype": "TCP", 00:11:17.220 "adrfam": "IPv4", 00:11:17.220 "traddr": "10.0.0.2", 00:11:17.220 "trsvcid": "4420" 00:11:17.220 }, 00:11:17.220 "peer_address": { 00:11:17.220 "trtype": "TCP", 00:11:17.220 "adrfam": "IPv4", 00:11:17.220 "traddr": "10.0.0.1", 00:11:17.220 "trsvcid": "37474" 00:11:17.220 }, 00:11:17.220 "auth": { 00:11:17.220 "state": "completed", 00:11:17.220 "digest": "sha256", 00:11:17.220 "dhgroup": 
"ffdhe4096" 00:11:17.220 } 00:11:17.220 } 00:11:17.220 ]' 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.220 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.221 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.221 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.221 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.479 08:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.045 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.303 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.870 00:11:18.870 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.870 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.870 08:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.128 { 00:11:19.128 "cntlid": 31, 00:11:19.128 "qid": 0, 00:11:19.128 "state": "enabled", 00:11:19.128 "thread": "nvmf_tgt_poll_group_000", 00:11:19.128 "listen_address": { 00:11:19.128 "trtype": "TCP", 00:11:19.128 "adrfam": "IPv4", 00:11:19.128 "traddr": "10.0.0.2", 00:11:19.128 "trsvcid": "4420" 00:11:19.128 }, 00:11:19.128 "peer_address": { 00:11:19.128 "trtype": "TCP", 00:11:19.128 "adrfam": "IPv4", 00:11:19.128 "traddr": "10.0.0.1", 00:11:19.128 "trsvcid": "37488" 00:11:19.128 }, 00:11:19.128 "auth": { 00:11:19.128 "state": "completed", 00:11:19.128 "digest": "sha256", 00:11:19.128 "dhgroup": "ffdhe4096" 00:11:19.128 } 00:11:19.128 } 00:11:19.128 ]' 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.128 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.695 08:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid 
cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.261 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.519 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.086 00:11:21.086 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.086 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.086 08:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.344 { 00:11:21.344 "cntlid": 33, 00:11:21.344 "qid": 0, 00:11:21.344 "state": "enabled", 00:11:21.344 "thread": "nvmf_tgt_poll_group_000", 00:11:21.344 "listen_address": { 00:11:21.344 "trtype": "TCP", 00:11:21.344 "adrfam": "IPv4", 00:11:21.344 "traddr": "10.0.0.2", 00:11:21.344 "trsvcid": "4420" 00:11:21.344 }, 00:11:21.344 "peer_address": { 00:11:21.344 "trtype": "TCP", 00:11:21.344 "adrfam": "IPv4", 00:11:21.344 "traddr": "10.0.0.1", 00:11:21.344 "trsvcid": "37510" 00:11:21.344 }, 00:11:21.344 "auth": { 00:11:21.344 "state": "completed", 00:11:21.344 "digest": "sha256", 00:11:21.344 "dhgroup": "ffdhe6144" 00:11:21.344 } 00:11:21.344 } 00:11:21.344 ]' 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.344 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.602 08:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.536 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.795 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.796 08:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.054 00:11:23.311 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.311 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.311 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.570 { 00:11:23.570 "cntlid": 35, 00:11:23.570 "qid": 0, 00:11:23.570 "state": "enabled", 00:11:23.570 "thread": "nvmf_tgt_poll_group_000", 00:11:23.570 "listen_address": { 00:11:23.570 "trtype": "TCP", 00:11:23.570 "adrfam": "IPv4", 00:11:23.570 "traddr": "10.0.0.2", 00:11:23.570 "trsvcid": "4420" 00:11:23.570 }, 00:11:23.570 "peer_address": { 00:11:23.570 "trtype": 
"TCP", 00:11:23.570 "adrfam": "IPv4", 00:11:23.570 "traddr": "10.0.0.1", 00:11:23.570 "trsvcid": "37542" 00:11:23.570 }, 00:11:23.570 "auth": { 00:11:23.570 "state": "completed", 00:11:23.570 "digest": "sha256", 00:11:23.570 "dhgroup": "ffdhe6144" 00:11:23.570 } 00:11:23.570 } 00:11:23.570 ]' 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.570 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.829 08:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:24.407 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.697 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.957 08:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.215 00:11:25.215 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.215 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.215 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.474 { 00:11:25.474 "cntlid": 37, 00:11:25.474 "qid": 0, 00:11:25.474 "state": "enabled", 00:11:25.474 "thread": "nvmf_tgt_poll_group_000", 00:11:25.474 "listen_address": { 00:11:25.474 "trtype": "TCP", 00:11:25.474 "adrfam": "IPv4", 00:11:25.474 "traddr": "10.0.0.2", 00:11:25.474 "trsvcid": "4420" 00:11:25.474 }, 00:11:25.474 "peer_address": { 00:11:25.474 "trtype": "TCP", 00:11:25.474 "adrfam": "IPv4", 00:11:25.474 "traddr": "10.0.0.1", 00:11:25.474 "trsvcid": "39396" 00:11:25.474 }, 00:11:25.474 "auth": { 00:11:25.474 "state": "completed", 00:11:25.474 "digest": "sha256", 00:11:25.474 "dhgroup": "ffdhe6144" 00:11:25.474 } 00:11:25.474 } 00:11:25.474 ]' 00:11:25.474 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.732 08:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.991 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:26.558 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.559 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:26.817 08:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.384 00:11:27.384 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.384 
08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.384 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.643 { 00:11:27.643 "cntlid": 39, 00:11:27.643 "qid": 0, 00:11:27.643 "state": "enabled", 00:11:27.643 "thread": "nvmf_tgt_poll_group_000", 00:11:27.643 "listen_address": { 00:11:27.643 "trtype": "TCP", 00:11:27.643 "adrfam": "IPv4", 00:11:27.643 "traddr": "10.0.0.2", 00:11:27.643 "trsvcid": "4420" 00:11:27.643 }, 00:11:27.643 "peer_address": { 00:11:27.643 "trtype": "TCP", 00:11:27.643 "adrfam": "IPv4", 00:11:27.643 "traddr": "10.0.0.1", 00:11:27.643 "trsvcid": "39428" 00:11:27.643 }, 00:11:27.643 "auth": { 00:11:27.643 "state": "completed", 00:11:27.643 "digest": "sha256", 00:11:27.643 "dhgroup": "ffdhe6144" 00:11:27.643 } 00:11:27.643 } 00:11:27.643 ]' 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.643 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.901 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.901 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.901 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.901 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.901 08:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.158 08:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:28.724 08:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.984 08:24:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:28.984 08:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.242 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.808 00:11:29.808 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.808 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.808 08:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.067 { 00:11:30.067 "cntlid": 41, 00:11:30.067 "qid": 0, 00:11:30.067 "state": "enabled", 00:11:30.067 "thread": "nvmf_tgt_poll_group_000", 00:11:30.067 "listen_address": { 00:11:30.067 "trtype": 
"TCP", 00:11:30.067 "adrfam": "IPv4", 00:11:30.067 "traddr": "10.0.0.2", 00:11:30.067 "trsvcid": "4420" 00:11:30.067 }, 00:11:30.067 "peer_address": { 00:11:30.067 "trtype": "TCP", 00:11:30.067 "adrfam": "IPv4", 00:11:30.067 "traddr": "10.0.0.1", 00:11:30.067 "trsvcid": "39456" 00:11:30.067 }, 00:11:30.067 "auth": { 00:11:30.067 "state": "completed", 00:11:30.067 "digest": "sha256", 00:11:30.067 "dhgroup": "ffdhe8192" 00:11:30.067 } 00:11:30.067 } 00:11:30.067 ]' 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.067 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.326 08:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:31.270 08:24:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.270 08:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.837 00:11:32.096 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.096 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.096 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.355 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.355 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.355 08:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.356 { 00:11:32.356 "cntlid": 43, 00:11:32.356 "qid": 0, 00:11:32.356 "state": "enabled", 00:11:32.356 "thread": "nvmf_tgt_poll_group_000", 00:11:32.356 "listen_address": { 00:11:32.356 "trtype": "TCP", 00:11:32.356 "adrfam": "IPv4", 00:11:32.356 "traddr": "10.0.0.2", 00:11:32.356 "trsvcid": "4420" 00:11:32.356 }, 00:11:32.356 "peer_address": { 00:11:32.356 "trtype": "TCP", 00:11:32.356 "adrfam": "IPv4", 00:11:32.356 "traddr": "10.0.0.1", 00:11:32.356 "trsvcid": "39484" 00:11:32.356 }, 00:11:32.356 "auth": { 00:11:32.356 "state": "completed", 00:11:32.356 "digest": "sha256", 00:11:32.356 "dhgroup": "ffdhe8192" 00:11:32.356 } 00:11:32.356 } 00:11:32.356 ]' 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.356 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.614 08:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.547 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.806 08:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.400 00:11:34.400 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.400 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.400 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.657 { 00:11:34.657 "cntlid": 45, 00:11:34.657 "qid": 0, 00:11:34.657 "state": "enabled", 00:11:34.657 "thread": "nvmf_tgt_poll_group_000", 00:11:34.657 "listen_address": { 00:11:34.657 "trtype": "TCP", 00:11:34.657 "adrfam": "IPv4", 00:11:34.657 "traddr": "10.0.0.2", 00:11:34.657 "trsvcid": "4420" 00:11:34.657 }, 00:11:34.657 "peer_address": { 00:11:34.657 "trtype": "TCP", 00:11:34.657 "adrfam": "IPv4", 00:11:34.657 "traddr": "10.0.0.1", 00:11:34.657 "trsvcid": "39522" 00:11:34.657 }, 00:11:34.657 "auth": { 00:11:34.657 "state": "completed", 00:11:34.657 "digest": "sha256", 00:11:34.657 "dhgroup": "ffdhe8192" 00:11:34.657 } 00:11:34.657 } 00:11:34.657 ]' 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.657 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.658 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.658 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.658 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.658 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.658 08:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.224 08:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.788 08:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.045 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.609 00:11:36.609 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.609 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.609 08:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:36.866 { 00:11:36.866 "cntlid": 47, 00:11:36.866 "qid": 0, 00:11:36.866 "state": "enabled", 00:11:36.866 "thread": "nvmf_tgt_poll_group_000", 00:11:36.866 "listen_address": { 00:11:36.866 "trtype": "TCP", 00:11:36.866 "adrfam": "IPv4", 00:11:36.866 "traddr": "10.0.0.2", 00:11:36.866 "trsvcid": "4420" 00:11:36.866 }, 00:11:36.866 "peer_address": { 00:11:36.866 "trtype": "TCP", 00:11:36.866 "adrfam": "IPv4", 00:11:36.866 "traddr": "10.0.0.1", 00:11:36.866 "trsvcid": "51558" 00:11:36.866 }, 00:11:36.866 "auth": { 00:11:36.866 "state": "completed", 00:11:36.866 "digest": "sha256", 00:11:36.866 "dhgroup": "ffdhe8192" 00:11:36.866 } 00:11:36.866 } 00:11:36.866 ]' 00:11:36.866 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.123 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.379 08:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:38.363 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.363 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:38.363 08:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.363 08:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.363 08:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.363 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.364 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.621 00:11:38.622 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.622 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.622 08:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.188 { 00:11:39.188 "cntlid": 49, 00:11:39.188 "qid": 0, 00:11:39.188 "state": "enabled", 00:11:39.188 "thread": "nvmf_tgt_poll_group_000", 00:11:39.188 "listen_address": { 00:11:39.188 "trtype": "TCP", 00:11:39.188 "adrfam": "IPv4", 00:11:39.188 "traddr": "10.0.0.2", 00:11:39.188 "trsvcid": "4420" 00:11:39.188 }, 00:11:39.188 "peer_address": { 00:11:39.188 "trtype": "TCP", 00:11:39.188 "adrfam": "IPv4", 00:11:39.188 "traddr": "10.0.0.1", 00:11:39.188 "trsvcid": "51582" 00:11:39.188 }, 00:11:39.188 "auth": { 00:11:39.188 "state": "completed", 00:11:39.188 "digest": "sha384", 00:11:39.188 "dhgroup": "null" 00:11:39.188 } 00:11:39.188 } 00:11:39.188 ]' 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.188 08:24:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.188 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.446 08:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:40.012 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.270 08:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.528 08:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.528 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.528 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.786 00:11:40.786 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.786 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.786 08:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.045 { 00:11:41.045 "cntlid": 51, 00:11:41.045 "qid": 0, 00:11:41.045 "state": "enabled", 00:11:41.045 "thread": "nvmf_tgt_poll_group_000", 00:11:41.045 "listen_address": { 00:11:41.045 "trtype": "TCP", 00:11:41.045 "adrfam": "IPv4", 00:11:41.045 "traddr": "10.0.0.2", 00:11:41.045 "trsvcid": "4420" 00:11:41.045 }, 00:11:41.045 "peer_address": { 00:11:41.045 "trtype": "TCP", 00:11:41.045 "adrfam": "IPv4", 00:11:41.045 "traddr": "10.0.0.1", 00:11:41.045 "trsvcid": "51614" 00:11:41.045 }, 00:11:41.045 "auth": { 00:11:41.045 "state": "completed", 00:11:41.045 "digest": "sha384", 00:11:41.045 "dhgroup": "null" 00:11:41.045 } 00:11:41.045 } 00:11:41.045 ]' 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.045 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.303 08:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.240 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.499 00:11:42.499 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.499 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.499 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.757 { 00:11:42.757 "cntlid": 53, 00:11:42.757 "qid": 0, 00:11:42.757 "state": "enabled", 00:11:42.757 "thread": "nvmf_tgt_poll_group_000", 00:11:42.757 "listen_address": { 00:11:42.757 "trtype": "TCP", 00:11:42.757 "adrfam": "IPv4", 00:11:42.757 "traddr": "10.0.0.2", 00:11:42.757 "trsvcid": "4420" 00:11:42.757 }, 00:11:42.757 "peer_address": { 00:11:42.757 "trtype": "TCP", 00:11:42.757 "adrfam": "IPv4", 00:11:42.757 "traddr": "10.0.0.1", 00:11:42.757 "trsvcid": "51648" 00:11:42.757 }, 00:11:42.757 "auth": { 00:11:42.757 "state": "completed", 00:11:42.757 "digest": "sha384", 00:11:42.757 "dhgroup": "null" 00:11:42.757 } 00:11:42.757 } 00:11:42.757 ]' 00:11:42.757 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.016 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.016 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.016 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:43.016 08:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.016 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.016 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.016 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.275 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.842 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:43.843 08:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.101 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.668 00:11:44.668 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.668 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.668 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.926 { 00:11:44.926 "cntlid": 55, 00:11:44.926 "qid": 0, 00:11:44.926 "state": "enabled", 00:11:44.926 "thread": "nvmf_tgt_poll_group_000", 00:11:44.926 "listen_address": { 00:11:44.926 "trtype": "TCP", 00:11:44.926 "adrfam": "IPv4", 00:11:44.926 "traddr": "10.0.0.2", 00:11:44.926 "trsvcid": "4420" 00:11:44.926 }, 00:11:44.926 "peer_address": { 00:11:44.926 "trtype": "TCP", 00:11:44.926 "adrfam": "IPv4", 00:11:44.926 "traddr": "10.0.0.1", 00:11:44.926 "trsvcid": "50900" 00:11:44.926 }, 00:11:44.926 "auth": { 00:11:44.926 "state": "completed", 00:11:44.926 "digest": "sha384", 00:11:44.926 "dhgroup": "null" 00:11:44.926 } 00:11:44.926 } 00:11:44.926 ]' 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.926 08:24:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.926 08:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.184 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:45.749 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.749 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:45.749 08:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.749 08:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 08:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.749 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.750 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.750 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:45.750 08:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.008 08:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.267 08:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.267 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.267 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.525 00:11:46.525 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.525 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.525 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.783 { 00:11:46.783 "cntlid": 57, 00:11:46.783 "qid": 0, 00:11:46.783 "state": "enabled", 00:11:46.783 "thread": "nvmf_tgt_poll_group_000", 00:11:46.783 "listen_address": { 00:11:46.783 "trtype": "TCP", 00:11:46.783 "adrfam": "IPv4", 00:11:46.783 "traddr": "10.0.0.2", 00:11:46.783 "trsvcid": "4420" 00:11:46.783 }, 00:11:46.783 "peer_address": { 00:11:46.783 "trtype": "TCP", 00:11:46.783 "adrfam": "IPv4", 00:11:46.783 "traddr": "10.0.0.1", 00:11:46.783 "trsvcid": "50916" 00:11:46.783 }, 00:11:46.783 "auth": { 00:11:46.783 "state": "completed", 00:11:46.783 "digest": "sha384", 00:11:46.783 "dhgroup": "ffdhe2048" 00:11:46.783 } 00:11:46.783 } 00:11:46.783 ]' 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.783 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.042 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.042 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.042 08:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.300 08:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret 
DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:47.868 08:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:48.126 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:48.126 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.126 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.126 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:48.126 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.127 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.693 00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
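The trace above repeats one connect_authenticate cycle per key for every digest/DH-group pair. Condensed into a sketch (the hostrpc body matches its expansion in the trace; rpc_cmd talking to the default target RPC socket is an assumption, since the trace never expands that helper):

# Minimal sketch of one sha384/ffdhe2048 cycle as exercised by target/auth.sh above.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumption: default target socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6
key=key1 ckey=ckey1

# Host side: restrict DH-HMAC-CHAP negotiation to the digest/DH group under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# Target side: allow the host NQN with this iteration's key pair.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
# Attach through the host bdev layer, then check the negotiated auth parameters.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0

Each cycle then redoes the same handshake through the kernel initiator with nvme-cli before nvmf_subsystem_remove_host clears the host entry for the next iteration, as the surrounding trace shows.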
00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.693 08:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.951 08:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.951 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.951 { 00:11:48.951 "cntlid": 59, 00:11:48.951 "qid": 0, 00:11:48.951 "state": "enabled", 00:11:48.951 "thread": "nvmf_tgt_poll_group_000", 00:11:48.951 "listen_address": { 00:11:48.951 "trtype": "TCP", 00:11:48.951 "adrfam": "IPv4", 00:11:48.951 "traddr": "10.0.0.2", 00:11:48.951 "trsvcid": "4420" 00:11:48.951 }, 00:11:48.951 "peer_address": { 00:11:48.951 "trtype": "TCP", 00:11:48.951 "adrfam": "IPv4", 00:11:48.951 "traddr": "10.0.0.1", 00:11:48.951 "trsvcid": "50950" 00:11:48.951 }, 00:11:48.951 "auth": { 00:11:48.951 "state": "completed", 00:11:48.951 "digest": "sha384", 00:11:48.951 "dhgroup": "ffdhe2048" 00:11:48.951 } 00:11:48.951 } 00:11:48.951 ]' 00:11:48.951 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.951 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.951 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.951 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.952 08:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.952 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.952 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.952 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.210 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:49.801 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.801 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:49.801 08:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.801 08:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.059 08:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.060 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.060 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.060 08:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.318 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.576 00:11:50.576 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.576 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.576 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.835 { 00:11:50.835 "cntlid": 61, 00:11:50.835 "qid": 0, 00:11:50.835 "state": "enabled", 00:11:50.835 "thread": "nvmf_tgt_poll_group_000", 00:11:50.835 "listen_address": { 00:11:50.835 "trtype": "TCP", 00:11:50.835 "adrfam": "IPv4", 00:11:50.835 "traddr": "10.0.0.2", 00:11:50.835 "trsvcid": "4420" 00:11:50.835 }, 00:11:50.835 "peer_address": { 00:11:50.835 "trtype": "TCP", 00:11:50.835 "adrfam": "IPv4", 00:11:50.835 "traddr": "10.0.0.1", 00:11:50.835 "trsvcid": "50976" 00:11:50.835 }, 00:11:50.835 "auth": { 00:11:50.835 "state": "completed", 00:11:50.835 "digest": "sha384", 00:11:50.835 "dhgroup": 
"ffdhe2048" 00:11:50.835 } 00:11:50.835 } 00:11:50.835 ]' 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.835 08:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.094 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.094 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.094 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.094 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.094 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.351 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:51.915 08:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.175 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.433 00:11:52.433 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.433 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.433 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.692 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.692 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.692 08:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.692 08:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.692 08:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.692 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.692 { 00:11:52.692 "cntlid": 63, 00:11:52.692 "qid": 0, 00:11:52.692 "state": "enabled", 00:11:52.692 "thread": "nvmf_tgt_poll_group_000", 00:11:52.692 "listen_address": { 00:11:52.692 "trtype": "TCP", 00:11:52.692 "adrfam": "IPv4", 00:11:52.692 "traddr": "10.0.0.2", 00:11:52.692 "trsvcid": "4420" 00:11:52.692 }, 00:11:52.692 "peer_address": { 00:11:52.692 "trtype": "TCP", 00:11:52.692 "adrfam": "IPv4", 00:11:52.692 "traddr": "10.0.0.1", 00:11:52.692 "trsvcid": "50996" 00:11:52.692 }, 00:11:52.692 "auth": { 00:11:52.692 "state": "completed", 00:11:52.692 "digest": "sha384", 00:11:52.692 "dhgroup": "ffdhe2048" 00:11:52.692 } 00:11:52.692 } 00:11:52.692 ]' 00:11:52.950 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.950 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.950 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.950 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.950 08:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.950 08:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.950 08:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.950 08:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.221 08:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid 
cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:11:54.156 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.157 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.724 00:11:54.724 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.724 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.724 08:24:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.983 { 00:11:54.983 "cntlid": 65, 00:11:54.983 "qid": 0, 00:11:54.983 "state": "enabled", 00:11:54.983 "thread": "nvmf_tgt_poll_group_000", 00:11:54.983 "listen_address": { 00:11:54.983 "trtype": "TCP", 00:11:54.983 "adrfam": "IPv4", 00:11:54.983 "traddr": "10.0.0.2", 00:11:54.983 "trsvcid": "4420" 00:11:54.983 }, 00:11:54.983 "peer_address": { 00:11:54.983 "trtype": "TCP", 00:11:54.983 "adrfam": "IPv4", 00:11:54.983 "traddr": "10.0.0.1", 00:11:54.983 "trsvcid": "59952" 00:11:54.983 }, 00:11:54.983 "auth": { 00:11:54.983 "state": "completed", 00:11:54.983 "digest": "sha384", 00:11:54.983 "dhgroup": "ffdhe3072" 00:11:54.983 } 00:11:54.983 } 00:11:54.983 ]' 00:11:54.983 08:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.983 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.242 08:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.178 
08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:56.178 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.436 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.693 00:11:56.693 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.693 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.694 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.950 { 00:11:56.950 "cntlid": 67, 00:11:56.950 "qid": 0, 00:11:56.950 "state": "enabled", 00:11:56.950 "thread": "nvmf_tgt_poll_group_000", 00:11:56.950 "listen_address": { 00:11:56.950 "trtype": "TCP", 00:11:56.950 "adrfam": "IPv4", 00:11:56.950 "traddr": "10.0.0.2", 00:11:56.950 "trsvcid": "4420" 00:11:56.950 }, 00:11:56.950 "peer_address": { 00:11:56.950 "trtype": "TCP", 00:11:56.950 
"adrfam": "IPv4", 00:11:56.950 "traddr": "10.0.0.1", 00:11:56.950 "trsvcid": "59970" 00:11:56.950 }, 00:11:56.950 "auth": { 00:11:56.950 "state": "completed", 00:11:56.950 "digest": "sha384", 00:11:56.950 "dhgroup": "ffdhe3072" 00:11:56.950 } 00:11:56.950 } 00:11:56.950 ]' 00:11:56.950 08:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.950 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.208 08:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.206 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.207 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.207 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.771 00:11:58.772 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.772 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.772 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.029 { 00:11:59.029 "cntlid": 69, 00:11:59.029 "qid": 0, 00:11:59.029 "state": "enabled", 00:11:59.029 "thread": "nvmf_tgt_poll_group_000", 00:11:59.029 "listen_address": { 00:11:59.029 "trtype": "TCP", 00:11:59.029 "adrfam": "IPv4", 00:11:59.029 "traddr": "10.0.0.2", 00:11:59.029 "trsvcid": "4420" 00:11:59.029 }, 00:11:59.029 "peer_address": { 00:11:59.029 "trtype": "TCP", 00:11:59.029 "adrfam": "IPv4", 00:11:59.029 "traddr": "10.0.0.1", 00:11:59.029 "trsvcid": "60008" 00:11:59.029 }, 00:11:59.029 "auth": { 00:11:59.029 "state": "completed", 00:11:59.029 "digest": "sha384", 00:11:59.029 "dhgroup": "ffdhe3072" 00:11:59.029 } 00:11:59.029 } 00:11:59.029 ]' 00:11:59.029 08:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.029 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.286 08:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.219 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.785 00:12:00.785 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
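The iteration running here (sha384/ffdhe3072 with key3) is the one configured without a controller key in this run, so the nvme-cli leg that follows each qpair check omits --dhchap-ctrl-secret. A sketch of that leg, following the invocations visible in the trace (secrets abbreviated; the full DHHC-1 strings appear above and correspond to the keys configured on the target side):

# Kernel-initiator leg of one iteration (sketch).
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumption: default target socket, as above
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
     -q "$hostnqn" --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
     --dhchap-secret "DHHC-1:03:..."     # abbreviated; iterations with a controller key also pass --dhchap-ctrl-secret
nvme disconnect -n "$subnqn"             # the trace expects "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # reset before the next key/dhgroup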
00:12:00.785 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.785 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.044 { 00:12:01.044 "cntlid": 71, 00:12:01.044 "qid": 0, 00:12:01.044 "state": "enabled", 00:12:01.044 "thread": "nvmf_tgt_poll_group_000", 00:12:01.044 "listen_address": { 00:12:01.044 "trtype": "TCP", 00:12:01.044 "adrfam": "IPv4", 00:12:01.044 "traddr": "10.0.0.2", 00:12:01.044 "trsvcid": "4420" 00:12:01.044 }, 00:12:01.044 "peer_address": { 00:12:01.044 "trtype": "TCP", 00:12:01.044 "adrfam": "IPv4", 00:12:01.044 "traddr": "10.0.0.1", 00:12:01.044 "trsvcid": "60048" 00:12:01.044 }, 00:12:01.044 "auth": { 00:12:01.044 "state": "completed", 00:12:01.044 "digest": "sha384", 00:12:01.044 "dhgroup": "ffdhe3072" 00:12:01.044 } 00:12:01.044 } 00:12:01.044 ]' 00:12:01.044 08:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.044 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.302 08:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:01.870 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.146 08:24:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:02.146 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.405 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.664 00:12:02.664 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.664 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.664 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.923 { 00:12:02.923 "cntlid": 73, 00:12:02.923 "qid": 0, 00:12:02.923 "state": "enabled", 00:12:02.923 "thread": "nvmf_tgt_poll_group_000", 00:12:02.923 "listen_address": { 00:12:02.923 "trtype": 
"TCP", 00:12:02.923 "adrfam": "IPv4", 00:12:02.923 "traddr": "10.0.0.2", 00:12:02.923 "trsvcid": "4420" 00:12:02.923 }, 00:12:02.923 "peer_address": { 00:12:02.923 "trtype": "TCP", 00:12:02.923 "adrfam": "IPv4", 00:12:02.923 "traddr": "10.0.0.1", 00:12:02.923 "trsvcid": "60076" 00:12:02.923 }, 00:12:02.923 "auth": { 00:12:02.923 "state": "completed", 00:12:02.923 "digest": "sha384", 00:12:02.923 "dhgroup": "ffdhe4096" 00:12:02.923 } 00:12:02.923 } 00:12:02.923 ]' 00:12:02.923 08:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.923 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.923 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.182 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.182 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.182 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.182 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.182 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.440 08:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.007 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.266 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:04.266 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.266 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.266 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:04.266 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.266 08:24:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.267 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.267 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.267 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.267 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.267 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.267 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.834 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.834 { 00:12:04.834 "cntlid": 75, 00:12:04.834 "qid": 0, 00:12:04.834 "state": "enabled", 00:12:04.834 "thread": "nvmf_tgt_poll_group_000", 00:12:04.834 "listen_address": { 00:12:04.834 "trtype": "TCP", 00:12:04.834 "adrfam": "IPv4", 00:12:04.834 "traddr": "10.0.0.2", 00:12:04.834 "trsvcid": "4420" 00:12:04.834 }, 00:12:04.834 "peer_address": { 00:12:04.834 "trtype": "TCP", 00:12:04.834 "adrfam": "IPv4", 00:12:04.834 "traddr": "10.0.0.1", 00:12:04.834 "trsvcid": "39324" 00:12:04.834 }, 00:12:04.834 "auth": { 00:12:04.834 "state": "completed", 00:12:04.834 "digest": "sha384", 00:12:04.834 "dhgroup": "ffdhe4096" 00:12:04.834 } 00:12:04.834 } 00:12:04.834 ]' 00:12:04.834 08:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.124 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.383 08:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:05.949 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.207 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.465 00:12:06.465 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.465 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.465 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.723 { 00:12:06.723 "cntlid": 77, 00:12:06.723 "qid": 0, 00:12:06.723 "state": "enabled", 00:12:06.723 "thread": "nvmf_tgt_poll_group_000", 00:12:06.723 "listen_address": { 00:12:06.723 "trtype": "TCP", 00:12:06.723 "adrfam": "IPv4", 00:12:06.723 "traddr": "10.0.0.2", 00:12:06.723 "trsvcid": "4420" 00:12:06.723 }, 00:12:06.723 "peer_address": { 00:12:06.723 "trtype": "TCP", 00:12:06.723 "adrfam": "IPv4", 00:12:06.723 "traddr": "10.0.0.1", 00:12:06.723 "trsvcid": "39356" 00:12:06.723 }, 00:12:06.723 "auth": { 00:12:06.723 "state": "completed", 00:12:06.723 "digest": "sha384", 00:12:06.723 "dhgroup": "ffdhe4096" 00:12:06.723 } 00:12:06.723 } 00:12:06.723 ]' 00:12:06.723 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.982 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.982 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.982 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.982 08:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.982 08:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.982 08:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.982 08:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.241 08:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.177 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.434 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.692 00:12:08.692 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.692 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.692 08:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:08.950 { 00:12:08.950 "cntlid": 79, 00:12:08.950 "qid": 0, 00:12:08.950 "state": "enabled", 00:12:08.950 "thread": "nvmf_tgt_poll_group_000", 00:12:08.950 "listen_address": { 00:12:08.950 "trtype": "TCP", 00:12:08.950 "adrfam": "IPv4", 00:12:08.950 "traddr": "10.0.0.2", 00:12:08.950 "trsvcid": "4420" 00:12:08.950 }, 00:12:08.950 "peer_address": { 00:12:08.950 "trtype": "TCP", 00:12:08.950 "adrfam": "IPv4", 00:12:08.950 "traddr": "10.0.0.1", 00:12:08.950 "trsvcid": "39372" 00:12:08.950 }, 00:12:08.950 "auth": { 00:12:08.950 "state": "completed", 00:12:08.950 "digest": "sha384", 00:12:08.950 "dhgroup": "ffdhe4096" 00:12:08.950 } 00:12:08.950 } 00:12:08.950 ]' 00:12:08.950 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.208 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.468 08:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:10.035 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
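The jq probes repeated after every attach are what actually assert the negotiated parameters. Using the same RPC call and jq filters as the log, with the expected values of the sha384/ffdhe6144 round that begins at this point, the check is roughly:

# Sketch of the per-round verification: fetch the subsystem's qpairs and assert the
# negotiated digest, DH group and auth state; values match the round starting here.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]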
00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.294 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.862 00:12:10.862 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.862 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.862 08:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.120 { 00:12:11.120 "cntlid": 81, 00:12:11.120 "qid": 0, 00:12:11.120 "state": "enabled", 00:12:11.120 "thread": "nvmf_tgt_poll_group_000", 00:12:11.120 "listen_address": { 00:12:11.120 "trtype": "TCP", 00:12:11.120 "adrfam": "IPv4", 00:12:11.120 "traddr": "10.0.0.2", 00:12:11.120 "trsvcid": "4420" 00:12:11.120 }, 00:12:11.120 "peer_address": { 00:12:11.120 "trtype": "TCP", 00:12:11.120 "adrfam": "IPv4", 00:12:11.120 "traddr": "10.0.0.1", 00:12:11.120 "trsvcid": "39406" 00:12:11.120 }, 00:12:11.120 "auth": { 00:12:11.120 "state": "completed", 00:12:11.120 "digest": "sha384", 00:12:11.120 "dhgroup": "ffdhe6144" 00:12:11.120 } 00:12:11.120 } 00:12:11.120 ]' 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:12:11.120 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.378 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.378 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.378 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.636 08:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:12.214 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:12.487 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.488 08:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.053 00:12:13.053 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.053 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.053 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.312 { 00:12:13.312 "cntlid": 83, 00:12:13.312 "qid": 0, 00:12:13.312 "state": "enabled", 00:12:13.312 "thread": "nvmf_tgt_poll_group_000", 00:12:13.312 "listen_address": { 00:12:13.312 "trtype": "TCP", 00:12:13.312 "adrfam": "IPv4", 00:12:13.312 "traddr": "10.0.0.2", 00:12:13.312 "trsvcid": "4420" 00:12:13.312 }, 00:12:13.312 "peer_address": { 00:12:13.312 "trtype": "TCP", 00:12:13.312 "adrfam": "IPv4", 00:12:13.312 "traddr": "10.0.0.1", 00:12:13.312 "trsvcid": "39430" 00:12:13.312 }, 00:12:13.312 "auth": { 00:12:13.312 "state": "completed", 00:12:13.312 "digest": "sha384", 00:12:13.312 "dhgroup": "ffdhe6144" 00:12:13.312 } 00:12:13.312 } 00:12:13.312 ]' 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:13.312 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.571 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.571 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.571 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.829 08:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:14.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:14.396 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.654 08:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.912 00:12:14.912 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.912 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.912 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.479 { 00:12:15.479 "cntlid": 85, 00:12:15.479 "qid": 0, 00:12:15.479 "state": "enabled", 00:12:15.479 "thread": "nvmf_tgt_poll_group_000", 00:12:15.479 "listen_address": { 00:12:15.479 "trtype": "TCP", 00:12:15.479 "adrfam": "IPv4", 00:12:15.479 "traddr": "10.0.0.2", 00:12:15.479 "trsvcid": "4420" 00:12:15.479 }, 00:12:15.479 "peer_address": { 00:12:15.479 "trtype": "TCP", 00:12:15.479 "adrfam": "IPv4", 00:12:15.479 "traddr": "10.0.0.1", 00:12:15.479 "trsvcid": "36896" 00:12:15.479 }, 00:12:15.479 "auth": { 00:12:15.479 "state": "completed", 00:12:15.479 "digest": "sha384", 00:12:15.479 "dhgroup": "ffdhe6144" 00:12:15.479 } 00:12:15.479 } 00:12:15.479 ]' 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.479 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.737 08:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:16.303 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.561 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.819 08:25:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.819 08:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.077 00:12:17.077 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.077 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.077 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.334 { 00:12:17.334 "cntlid": 87, 00:12:17.334 "qid": 0, 00:12:17.334 "state": "enabled", 00:12:17.334 "thread": "nvmf_tgt_poll_group_000", 00:12:17.334 "listen_address": { 00:12:17.334 "trtype": "TCP", 00:12:17.334 "adrfam": "IPv4", 00:12:17.334 "traddr": "10.0.0.2", 00:12:17.334 "trsvcid": "4420" 00:12:17.334 }, 00:12:17.334 "peer_address": { 00:12:17.334 "trtype": "TCP", 00:12:17.334 "adrfam": "IPv4", 00:12:17.334 "traddr": "10.0.0.1", 00:12:17.334 "trsvcid": "36924" 00:12:17.334 }, 00:12:17.334 "auth": { 00:12:17.334 "state": "completed", 00:12:17.334 "digest": "sha384", 00:12:17.334 "dhgroup": "ffdhe6144" 00:12:17.334 } 00:12:17.334 } 00:12:17.334 ]' 00:12:17.334 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.592 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.850 08:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:18.416 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:18.731 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:18.731 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.731 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:18.731 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:18.731 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:18.731 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.732 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.732 08:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.732 08:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.732 08:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.732 08:25:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.732 08:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.298 00:12:19.298 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.298 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.298 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.557 { 00:12:19.557 "cntlid": 89, 00:12:19.557 "qid": 0, 00:12:19.557 "state": "enabled", 00:12:19.557 "thread": "nvmf_tgt_poll_group_000", 00:12:19.557 "listen_address": { 00:12:19.557 "trtype": "TCP", 00:12:19.557 "adrfam": "IPv4", 00:12:19.557 "traddr": "10.0.0.2", 00:12:19.557 "trsvcid": "4420" 00:12:19.557 }, 00:12:19.557 "peer_address": { 00:12:19.557 "trtype": "TCP", 00:12:19.557 "adrfam": "IPv4", 00:12:19.557 "traddr": "10.0.0.1", 00:12:19.557 "trsvcid": "36956" 00:12:19.557 }, 00:12:19.557 "auth": { 00:12:19.557 "state": "completed", 00:12:19.557 "digest": "sha384", 00:12:19.557 "dhgroup": "ffdhe8192" 00:12:19.557 } 00:12:19.557 } 00:12:19.557 ]' 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.557 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.815 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:19.815 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.815 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.815 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.815 08:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.073 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret 
DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:20.640 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.899 08:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.464 00:12:21.464 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.464 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.464 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
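Each round also exercises the kernel initiator: after the SPDK host detaches, the same subsystem is connected with nvme-cli using the DHHC-1 secret blobs generated for this run, then disconnected, and the host is removed from the subsystem. A sketch with the secrets left as placeholders (the real values are the DHHC-1:... strings printed in the log):

# Kernel-initiator leg of a round; substitute the DHHC-1 host/controller blobs from
# the log for the round being reproduced -- they are generated per run.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret '<DHHC-1 host secret from the log>' \
    --dhchap-ctrl-secret '<DHHC-1 controller secret from the log>'
nvme disconnect -n "$SUBNQN"     # the log prints "... disconnected 1 controller(s)"

# Target-side cleanup before moving on to the next key/dhgroup combination.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"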
00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.723 { 00:12:21.723 "cntlid": 91, 00:12:21.723 "qid": 0, 00:12:21.723 "state": "enabled", 00:12:21.723 "thread": "nvmf_tgt_poll_group_000", 00:12:21.723 "listen_address": { 00:12:21.723 "trtype": "TCP", 00:12:21.723 "adrfam": "IPv4", 00:12:21.723 "traddr": "10.0.0.2", 00:12:21.723 "trsvcid": "4420" 00:12:21.723 }, 00:12:21.723 "peer_address": { 00:12:21.723 "trtype": "TCP", 00:12:21.723 "adrfam": "IPv4", 00:12:21.723 "traddr": "10.0.0.1", 00:12:21.723 "trsvcid": "36990" 00:12:21.723 }, 00:12:21.723 "auth": { 00:12:21.723 "state": "completed", 00:12:21.723 "digest": "sha384", 00:12:21.723 "dhgroup": "ffdhe8192" 00:12:21.723 } 00:12:21.723 } 00:12:21.723 ]' 00:12:21.723 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.980 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.980 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.980 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:21.980 08:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.980 08:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.980 08:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.980 08:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.238 08:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.175 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.110 00:12:24.110 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.110 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.110 08:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.110 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.110 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.110 08:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.110 08:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.110 08:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.110 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.110 { 00:12:24.110 "cntlid": 93, 00:12:24.110 "qid": 0, 00:12:24.110 "state": "enabled", 00:12:24.110 "thread": "nvmf_tgt_poll_group_000", 00:12:24.110 "listen_address": { 00:12:24.110 "trtype": "TCP", 00:12:24.110 "adrfam": "IPv4", 00:12:24.110 "traddr": "10.0.0.2", 00:12:24.110 "trsvcid": "4420" 00:12:24.110 }, 00:12:24.110 "peer_address": { 00:12:24.110 "trtype": "TCP", 00:12:24.110 "adrfam": "IPv4", 00:12:24.110 "traddr": "10.0.0.1", 00:12:24.110 "trsvcid": "37024" 00:12:24.111 }, 00:12:24.111 
"auth": { 00:12:24.111 "state": "completed", 00:12:24.111 "digest": "sha384", 00:12:24.111 "dhgroup": "ffdhe8192" 00:12:24.111 } 00:12:24.111 } 00:12:24.111 ]' 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.369 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.628 08:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.563 08:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.129 00:12:26.129 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.129 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.129 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.696 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.696 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.696 08:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.696 08:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.697 { 00:12:26.697 "cntlid": 95, 00:12:26.697 "qid": 0, 00:12:26.697 "state": "enabled", 00:12:26.697 "thread": "nvmf_tgt_poll_group_000", 00:12:26.697 "listen_address": { 00:12:26.697 "trtype": "TCP", 00:12:26.697 "adrfam": "IPv4", 00:12:26.697 "traddr": "10.0.0.2", 00:12:26.697 "trsvcid": "4420" 00:12:26.697 }, 00:12:26.697 "peer_address": { 00:12:26.697 "trtype": "TCP", 00:12:26.697 "adrfam": "IPv4", 00:12:26.697 "traddr": "10.0.0.1", 00:12:26.697 "trsvcid": "36932" 00:12:26.697 }, 00:12:26.697 "auth": { 00:12:26.697 "state": "completed", 00:12:26.697 "digest": "sha384", 00:12:26.697 "dhgroup": "ffdhe8192" 00:12:26.697 } 00:12:26.697 } 00:12:26.697 ]' 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.697 08:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.978 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.915 08:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.174 00:12:28.174 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
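After each attach, the trace checks the negotiated authentication parameters by listing the subsystem's qpairs and filtering the JSON with jq. Below is a sketch of just that assertion step, reusing the jq filters and JSON shape shown in this log; the expected values and the target-side RPC socket are illustrative, not taken from the script itself.

  # Auth assertions corresponding to target/auth.sh@44-48 in the trace.
  set -e
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  expected_digest=sha512 expected_dhgroup=null   # whatever the current loop iteration configured

  # Host side: the attached controller must be visible as nvme0.
  [[ "$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # Target side: the qpair must report the expected digest, dhgroup, and a completed auth state.
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$expected_digest"  ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$expected_dhgroup" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]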
00:12:28.174 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.174 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.433 { 00:12:28.433 "cntlid": 97, 00:12:28.433 "qid": 0, 00:12:28.433 "state": "enabled", 00:12:28.433 "thread": "nvmf_tgt_poll_group_000", 00:12:28.433 "listen_address": { 00:12:28.433 "trtype": "TCP", 00:12:28.433 "adrfam": "IPv4", 00:12:28.433 "traddr": "10.0.0.2", 00:12:28.433 "trsvcid": "4420" 00:12:28.433 }, 00:12:28.433 "peer_address": { 00:12:28.433 "trtype": "TCP", 00:12:28.433 "adrfam": "IPv4", 00:12:28.433 "traddr": "10.0.0.1", 00:12:28.433 "trsvcid": "36970" 00:12:28.433 }, 00:12:28.433 "auth": { 00:12:28.433 "state": "completed", 00:12:28.433 "digest": "sha512", 00:12:28.433 "dhgroup": "null" 00:12:28.433 } 00:12:28.433 } 00:12:28.433 ]' 00:12:28.433 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.692 08:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.951 08:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.885 08:25:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.885 08:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.885 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.452 00:12:30.452 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.452 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.452 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.711 { 00:12:30.711 "cntlid": 99, 00:12:30.711 "qid": 0, 00:12:30.711 "state": "enabled", 00:12:30.711 "thread": "nvmf_tgt_poll_group_000", 00:12:30.711 "listen_address": { 00:12:30.711 "trtype": "TCP", 00:12:30.711 "adrfam": 
"IPv4", 00:12:30.711 "traddr": "10.0.0.2", 00:12:30.711 "trsvcid": "4420" 00:12:30.711 }, 00:12:30.711 "peer_address": { 00:12:30.711 "trtype": "TCP", 00:12:30.711 "adrfam": "IPv4", 00:12:30.711 "traddr": "10.0.0.1", 00:12:30.711 "trsvcid": "36984" 00:12:30.711 }, 00:12:30.711 "auth": { 00:12:30.711 "state": "completed", 00:12:30.711 "digest": "sha512", 00:12:30.711 "dhgroup": "null" 00:12:30.711 } 00:12:30.711 } 00:12:30.711 ]' 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.711 08:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.970 08:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:31.903 08:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.903 08:25:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.903 08:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.904 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.904 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.162 00:12:32.162 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.162 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.162 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.754 { 00:12:32.754 "cntlid": 101, 00:12:32.754 "qid": 0, 00:12:32.754 "state": "enabled", 00:12:32.754 "thread": "nvmf_tgt_poll_group_000", 00:12:32.754 "listen_address": { 00:12:32.754 "trtype": "TCP", 00:12:32.754 "adrfam": "IPv4", 00:12:32.754 "traddr": "10.0.0.2", 00:12:32.754 "trsvcid": "4420" 00:12:32.754 }, 00:12:32.754 "peer_address": { 00:12:32.754 "trtype": "TCP", 00:12:32.754 "adrfam": "IPv4", 00:12:32.754 "traddr": "10.0.0.1", 00:12:32.754 "trsvcid": "37004" 00:12:32.754 }, 00:12:32.754 "auth": { 00:12:32.754 "state": "completed", 00:12:32.754 "digest": "sha512", 00:12:32.754 "dhgroup": "null" 00:12:32.754 } 00:12:32.754 } 00:12:32.754 ]' 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
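Each cycle also exercises the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP host and controller secrets and then disconnects, as the surrounding trace shows. A sketch of that leg follows, with the DHHC-1 secrets abbreviated to placeholders (the full generated values appear verbatim in the log).

  # Kernel-initiator leg of the cycle (target/auth.sh@52-56 in the trace).
  # The <...> values are placeholders for the DHHC-1 secrets generated for this run.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
      --dhchap-secret 'DHHC-1:00:<host secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0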
00:12:32.754 08:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.015 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:33.594 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.864 08:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.135 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.408 { 00:12:34.408 "cntlid": 103, 00:12:34.408 "qid": 0, 00:12:34.408 "state": "enabled", 00:12:34.408 "thread": "nvmf_tgt_poll_group_000", 00:12:34.408 "listen_address": { 00:12:34.408 "trtype": "TCP", 00:12:34.408 "adrfam": "IPv4", 00:12:34.408 "traddr": "10.0.0.2", 00:12:34.408 "trsvcid": "4420" 00:12:34.408 }, 00:12:34.408 "peer_address": { 00:12:34.408 "trtype": "TCP", 00:12:34.408 "adrfam": "IPv4", 00:12:34.408 "traddr": "10.0.0.1", 00:12:34.408 "trsvcid": "50860" 00:12:34.408 }, 00:12:34.408 "auth": { 00:12:34.408 "state": "completed", 00:12:34.408 "digest": "sha512", 00:12:34.408 "dhgroup": "null" 00:12:34.408 } 00:12:34.408 } 00:12:34.408 ]' 00:12:34.408 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.670 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.928 08:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.548 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.804 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.805 08:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.061 00:12:36.061 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.061 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.061 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.317 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.318 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.318 08:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.318 08:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.318 08:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.318 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.318 { 00:12:36.318 "cntlid": 105, 00:12:36.318 "qid": 0, 00:12:36.318 "state": "enabled", 00:12:36.318 "thread": "nvmf_tgt_poll_group_000", 00:12:36.318 
"listen_address": { 00:12:36.318 "trtype": "TCP", 00:12:36.318 "adrfam": "IPv4", 00:12:36.318 "traddr": "10.0.0.2", 00:12:36.318 "trsvcid": "4420" 00:12:36.318 }, 00:12:36.318 "peer_address": { 00:12:36.318 "trtype": "TCP", 00:12:36.318 "adrfam": "IPv4", 00:12:36.318 "traddr": "10.0.0.1", 00:12:36.318 "trsvcid": "50880" 00:12:36.318 }, 00:12:36.318 "auth": { 00:12:36.318 "state": "completed", 00:12:36.318 "digest": "sha512", 00:12:36.318 "dhgroup": "ffdhe2048" 00:12:36.318 } 00:12:36.318 } 00:12:36.318 ]' 00:12:36.318 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.575 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.891 08:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:37.470 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.729 08:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.988 00:12:37.988 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.988 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.988 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.555 { 00:12:38.555 "cntlid": 107, 00:12:38.555 "qid": 0, 00:12:38.555 "state": "enabled", 00:12:38.555 "thread": "nvmf_tgt_poll_group_000", 00:12:38.555 "listen_address": { 00:12:38.555 "trtype": "TCP", 00:12:38.555 "adrfam": "IPv4", 00:12:38.555 "traddr": "10.0.0.2", 00:12:38.555 "trsvcid": "4420" 00:12:38.555 }, 00:12:38.555 "peer_address": { 00:12:38.555 "trtype": "TCP", 00:12:38.555 "adrfam": "IPv4", 00:12:38.555 "traddr": "10.0.0.1", 00:12:38.555 "trsvcid": "50906" 00:12:38.555 }, 00:12:38.555 "auth": { 00:12:38.555 "state": "completed", 00:12:38.555 "digest": "sha512", 00:12:38.555 "dhgroup": "ffdhe2048" 00:12:38.555 } 00:12:38.555 } 00:12:38.555 ]' 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.555 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.814 08:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.380 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.638 08:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.895 00:12:40.153 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.153 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.153 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.411 { 00:12:40.411 "cntlid": 109, 00:12:40.411 "qid": 0, 00:12:40.411 "state": "enabled", 00:12:40.411 "thread": "nvmf_tgt_poll_group_000", 00:12:40.411 "listen_address": { 00:12:40.411 "trtype": "TCP", 00:12:40.411 "adrfam": "IPv4", 00:12:40.411 "traddr": "10.0.0.2", 00:12:40.411 "trsvcid": "4420" 00:12:40.411 }, 00:12:40.411 "peer_address": { 00:12:40.411 "trtype": "TCP", 00:12:40.411 "adrfam": "IPv4", 00:12:40.411 "traddr": "10.0.0.1", 00:12:40.411 "trsvcid": "50936" 00:12:40.411 }, 00:12:40.411 "auth": { 00:12:40.411 "state": "completed", 00:12:40.411 "digest": "sha512", 00:12:40.411 "dhgroup": "ffdhe2048" 00:12:40.411 } 00:12:40.411 } 00:12:40.411 ]' 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.411 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.669 08:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:41.606 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.875 08:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.132 00:12:42.132 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.132 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.132 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:42.390 { 00:12:42.390 "cntlid": 111, 00:12:42.390 "qid": 0, 00:12:42.390 "state": "enabled", 00:12:42.390 "thread": "nvmf_tgt_poll_group_000", 00:12:42.390 "listen_address": { 00:12:42.390 "trtype": "TCP", 00:12:42.390 "adrfam": "IPv4", 00:12:42.390 "traddr": "10.0.0.2", 00:12:42.390 "trsvcid": "4420" 00:12:42.390 }, 00:12:42.390 "peer_address": { 00:12:42.390 "trtype": "TCP", 00:12:42.390 "adrfam": "IPv4", 00:12:42.390 "traddr": "10.0.0.1", 00:12:42.390 "trsvcid": "50980" 00:12:42.390 }, 00:12:42.390 "auth": { 00:12:42.390 "state": "completed", 00:12:42.390 "digest": "sha512", 00:12:42.390 "dhgroup": "ffdhe2048" 00:12:42.390 } 00:12:42.390 } 00:12:42.390 ]' 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.390 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.391 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.391 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.649 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.649 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.649 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.906 08:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.469 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.726 08:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.983 00:12:43.983 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.983 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.983 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.546 { 00:12:44.546 "cntlid": 113, 00:12:44.546 "qid": 0, 00:12:44.546 "state": "enabled", 00:12:44.546 "thread": "nvmf_tgt_poll_group_000", 00:12:44.546 "listen_address": { 00:12:44.546 "trtype": "TCP", 00:12:44.546 "adrfam": "IPv4", 00:12:44.546 "traddr": "10.0.0.2", 00:12:44.546 "trsvcid": "4420" 00:12:44.546 }, 00:12:44.546 "peer_address": { 00:12:44.546 "trtype": "TCP", 00:12:44.546 "adrfam": "IPv4", 00:12:44.546 "traddr": "10.0.0.1", 00:12:44.546 "trsvcid": "45140" 00:12:44.546 }, 00:12:44.546 "auth": { 00:12:44.546 "state": "completed", 00:12:44.546 "digest": "sha512", 00:12:44.546 "dhgroup": "ffdhe3072" 00:12:44.546 } 00:12:44.546 } 00:12:44.546 ]' 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.546 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.803 08:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.769 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.770 08:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.026 00:12:46.026 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.026 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.026 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.285 { 00:12:46.285 "cntlid": 115, 00:12:46.285 "qid": 0, 00:12:46.285 "state": "enabled", 00:12:46.285 "thread": "nvmf_tgt_poll_group_000", 00:12:46.285 "listen_address": { 00:12:46.285 "trtype": "TCP", 00:12:46.285 "adrfam": "IPv4", 00:12:46.285 "traddr": "10.0.0.2", 00:12:46.285 "trsvcid": "4420" 00:12:46.285 }, 00:12:46.285 "peer_address": { 00:12:46.285 "trtype": "TCP", 00:12:46.285 "adrfam": "IPv4", 00:12:46.285 "traddr": "10.0.0.1", 00:12:46.285 "trsvcid": "45168" 00:12:46.285 }, 00:12:46.285 "auth": { 00:12:46.285 "state": "completed", 00:12:46.285 "digest": "sha512", 00:12:46.285 "dhgroup": "ffdhe3072" 00:12:46.285 } 00:12:46.285 } 00:12:46.285 ]' 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.285 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.599 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:46.599 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.599 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.599 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.599 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.857 08:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:47.422 08:25:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.422 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.681 08:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.939 00:12:47.939 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.939 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.939 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.197 { 00:12:48.197 "cntlid": 117, 00:12:48.197 "qid": 0, 00:12:48.197 "state": "enabled", 00:12:48.197 "thread": "nvmf_tgt_poll_group_000", 00:12:48.197 "listen_address": { 00:12:48.197 "trtype": "TCP", 00:12:48.197 "adrfam": "IPv4", 00:12:48.197 "traddr": "10.0.0.2", 00:12:48.197 "trsvcid": "4420" 00:12:48.197 }, 00:12:48.197 "peer_address": { 00:12:48.197 "trtype": "TCP", 00:12:48.197 "adrfam": "IPv4", 00:12:48.197 "traddr": "10.0.0.1", 00:12:48.197 "trsvcid": "45200" 00:12:48.197 }, 00:12:48.197 "auth": { 00:12:48.197 "state": "completed", 00:12:48.197 "digest": "sha512", 00:12:48.197 "dhgroup": "ffdhe3072" 00:12:48.197 } 00:12:48.197 } 00:12:48.197 ]' 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.197 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.454 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:48.454 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.454 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.454 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.454 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.712 08:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.645 08:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.969 00:12:49.969 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.969 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.969 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.227 { 00:12:50.227 "cntlid": 119, 00:12:50.227 "qid": 0, 00:12:50.227 "state": "enabled", 00:12:50.227 "thread": "nvmf_tgt_poll_group_000", 00:12:50.227 "listen_address": { 00:12:50.227 "trtype": "TCP", 00:12:50.227 "adrfam": "IPv4", 00:12:50.227 "traddr": "10.0.0.2", 00:12:50.227 "trsvcid": "4420" 00:12:50.227 }, 00:12:50.227 "peer_address": { 00:12:50.227 "trtype": "TCP", 00:12:50.227 "adrfam": "IPv4", 00:12:50.227 "traddr": "10.0.0.1", 00:12:50.227 "trsvcid": "45234" 00:12:50.227 }, 00:12:50.227 "auth": { 00:12:50.227 "state": "completed", 00:12:50.227 "digest": "sha512", 00:12:50.227 "dhgroup": "ffdhe3072" 00:12:50.227 } 00:12:50.227 } 00:12:50.227 ]' 00:12:50.227 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.486 
08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.486 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.486 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.486 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.486 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.486 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.486 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.745 08:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.311 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.569 08:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.133 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.133 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.133 { 00:12:52.133 "cntlid": 121, 00:12:52.133 "qid": 0, 00:12:52.133 "state": "enabled", 00:12:52.133 "thread": "nvmf_tgt_poll_group_000", 00:12:52.133 "listen_address": { 00:12:52.133 "trtype": "TCP", 00:12:52.133 "adrfam": "IPv4", 00:12:52.133 "traddr": "10.0.0.2", 00:12:52.133 "trsvcid": "4420" 00:12:52.133 }, 00:12:52.133 "peer_address": { 00:12:52.133 "trtype": "TCP", 00:12:52.133 "adrfam": "IPv4", 00:12:52.133 "traddr": "10.0.0.1", 00:12:52.133 "trsvcid": "45258" 00:12:52.133 }, 00:12:52.133 "auth": { 00:12:52.133 "state": "completed", 00:12:52.133 "digest": "sha512", 00:12:52.133 "dhgroup": "ffdhe4096" 00:12:52.133 } 00:12:52.133 } 00:12:52.133 ]' 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.391 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.648 08:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret 
DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.588 08:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.157 00:12:54.157 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.157 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.157 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
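Each pass of the trace above is the same DH-HMAC-CHAP round-trip with a different digest/dhgroup/key combination. A minimal sketch of one such round, under the same assumptions the log shows (subsystem nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, host RPC socket /var/tmp/host.sock, keys key1/ckey1 already registered on both sides; rpc_cmd stands in for the autotest helper that talks to the target's RPC socket) — a sketch of the flow, not the actual target/auth.sh loop:

    # One authentication round as exercised in the trace (hedged sketch).
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

    # Host side: restrict the bdev layer to one digest and one DH group for this pass.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target side: allow the host NQN on the subsystem with the chosen key pair.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify: controller name on the host, negotiated auth parameters on the target.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

    # Tear down before the next digest/dhgroup/key combination.
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"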
00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.416 { 00:12:54.416 "cntlid": 123, 00:12:54.416 "qid": 0, 00:12:54.416 "state": "enabled", 00:12:54.416 "thread": "nvmf_tgt_poll_group_000", 00:12:54.416 "listen_address": { 00:12:54.416 "trtype": "TCP", 00:12:54.416 "adrfam": "IPv4", 00:12:54.416 "traddr": "10.0.0.2", 00:12:54.416 "trsvcid": "4420" 00:12:54.416 }, 00:12:54.416 "peer_address": { 00:12:54.416 "trtype": "TCP", 00:12:54.416 "adrfam": "IPv4", 00:12:54.416 "traddr": "10.0.0.1", 00:12:54.416 "trsvcid": "45276" 00:12:54.416 }, 00:12:54.416 "auth": { 00:12:54.416 "state": "completed", 00:12:54.416 "digest": "sha512", 00:12:54.416 "dhgroup": "ffdhe4096" 00:12:54.416 } 00:12:54.416 } 00:12:54.416 ]' 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.416 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.672 08:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.604 08:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.605 08:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.605 08:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.605 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.605 08:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.169 00:12:56.169 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.169 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.169 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.427 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.428 { 00:12:56.428 "cntlid": 125, 00:12:56.428 "qid": 0, 00:12:56.428 "state": "enabled", 00:12:56.428 "thread": "nvmf_tgt_poll_group_000", 00:12:56.428 "listen_address": { 00:12:56.428 "trtype": "TCP", 00:12:56.428 "adrfam": "IPv4", 00:12:56.428 "traddr": "10.0.0.2", 00:12:56.428 "trsvcid": "4420" 00:12:56.428 }, 00:12:56.428 "peer_address": { 00:12:56.428 "trtype": "TCP", 00:12:56.428 "adrfam": "IPv4", 00:12:56.428 "traddr": "10.0.0.1", 00:12:56.428 "trsvcid": "45884" 00:12:56.428 }, 00:12:56.428 
"auth": { 00:12:56.428 "state": "completed", 00:12:56.428 "digest": "sha512", 00:12:56.428 "dhgroup": "ffdhe4096" 00:12:56.428 } 00:12:56.428 } 00:12:56.428 ]' 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.428 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.686 08:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.620 08:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.215 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.215 { 00:12:58.215 "cntlid": 127, 00:12:58.215 "qid": 0, 00:12:58.215 "state": "enabled", 00:12:58.215 "thread": "nvmf_tgt_poll_group_000", 00:12:58.215 "listen_address": { 00:12:58.215 "trtype": "TCP", 00:12:58.215 "adrfam": "IPv4", 00:12:58.215 "traddr": "10.0.0.2", 00:12:58.215 "trsvcid": "4420" 00:12:58.215 }, 00:12:58.215 "peer_address": { 00:12:58.215 "trtype": "TCP", 00:12:58.215 "adrfam": "IPv4", 00:12:58.215 "traddr": "10.0.0.1", 00:12:58.215 "trsvcid": "45910" 00:12:58.215 }, 00:12:58.215 "auth": { 00:12:58.215 "state": "completed", 00:12:58.215 "digest": "sha512", 00:12:58.215 "dhgroup": "ffdhe4096" 00:12:58.215 } 00:12:58.215 } 00:12:58.215 ]' 00:12:58.215 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.472 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.472 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.472 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.473 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.473 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.473 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.473 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.730 08:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.663 08:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.229 00:13:00.229 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.229 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.229 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.486 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.486 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.486 08:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.486 08:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.486 08:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.486 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.486 { 00:13:00.486 "cntlid": 129, 00:13:00.486 "qid": 0, 00:13:00.486 "state": "enabled", 00:13:00.486 "thread": "nvmf_tgt_poll_group_000", 00:13:00.486 "listen_address": { 00:13:00.486 "trtype": "TCP", 00:13:00.486 "adrfam": "IPv4", 00:13:00.486 "traddr": "10.0.0.2", 00:13:00.486 "trsvcid": "4420" 00:13:00.486 }, 00:13:00.486 "peer_address": { 00:13:00.486 "trtype": "TCP", 00:13:00.486 "adrfam": "IPv4", 00:13:00.486 "traddr": "10.0.0.1", 00:13:00.486 "trsvcid": "45934" 00:13:00.486 }, 00:13:00.486 "auth": { 00:13:00.486 "state": "completed", 00:13:00.486 "digest": "sha512", 00:13:00.487 "dhgroup": "ffdhe6144" 00:13:00.487 } 00:13:00.487 } 00:13:00.487 ]' 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.487 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.745 08:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
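Between the RPC-driven detach and the final remove_host, each round in the trace also exercises the in-kernel path with nvme-cli, passing the secrets literally in DHHC-1 form rather than by key name. A hedged sketch of that step; <host-secret> and <ctrl-secret> are placeholders, not the generated values from this run:

    # Kernel initiator: connect with explicit DH-HMAC-CHAP secrets, then drop the session.
    # <host-secret>/<ctrl-secret> are placeholders for the keys generated earlier in the test.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        --dhchap-secret 'DHHC-1:00:<host-secret>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0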
00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.693 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.010 08:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.268 00:13:02.268 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.268 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.268 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.527 { 00:13:02.527 "cntlid": 131, 00:13:02.527 "qid": 0, 00:13:02.527 "state": "enabled", 00:13:02.527 "thread": "nvmf_tgt_poll_group_000", 00:13:02.527 "listen_address": { 00:13:02.527 "trtype": "TCP", 00:13:02.527 "adrfam": "IPv4", 00:13:02.527 "traddr": "10.0.0.2", 00:13:02.527 
"trsvcid": "4420" 00:13:02.527 }, 00:13:02.527 "peer_address": { 00:13:02.527 "trtype": "TCP", 00:13:02.527 "adrfam": "IPv4", 00:13:02.527 "traddr": "10.0.0.1", 00:13:02.527 "trsvcid": "45950" 00:13:02.527 }, 00:13:02.527 "auth": { 00:13:02.527 "state": "completed", 00:13:02.527 "digest": "sha512", 00:13:02.527 "dhgroup": "ffdhe6144" 00:13:02.527 } 00:13:02.527 } 00:13:02.527 ]' 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.527 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.784 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.784 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.784 08:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.042 08:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.607 08:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.864 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:03.864 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.864 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:03.864 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:03.864 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:03.864 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.864 08:25:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.865 08:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.865 08:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.865 08:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.865 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.865 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.430 00:13:04.430 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.430 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.430 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.688 { 00:13:04.688 "cntlid": 133, 00:13:04.688 "qid": 0, 00:13:04.688 "state": "enabled", 00:13:04.688 "thread": "nvmf_tgt_poll_group_000", 00:13:04.688 "listen_address": { 00:13:04.688 "trtype": "TCP", 00:13:04.688 "adrfam": "IPv4", 00:13:04.688 "traddr": "10.0.0.2", 00:13:04.688 "trsvcid": "4420" 00:13:04.688 }, 00:13:04.688 "peer_address": { 00:13:04.688 "trtype": "TCP", 00:13:04.688 "adrfam": "IPv4", 00:13:04.688 "traddr": "10.0.0.1", 00:13:04.688 "trsvcid": "49376" 00:13:04.688 }, 00:13:04.688 "auth": { 00:13:04.688 "state": "completed", 00:13:04.688 "digest": "sha512", 00:13:04.688 "dhgroup": "ffdhe6144" 00:13:04.688 } 00:13:04.688 } 00:13:04.688 ]' 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:04.688 08:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.946 08:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.879 08:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.879 08:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.138 08:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.138 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.138 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.397 
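The trace above is one full pass of connect_authenticate() for sha512/ffdhe6144; a condensed sketch of that cycle follows, using only commands, sockets, addresses and NQNs that appear in this log ($rpc stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; subsystem calls go to the target app on its default /var/tmp/spdk.sock, bdev_nvme calls to the host app on /var/tmp/host.sock; $key1/$ckey1 stand for the DHHC-1:01:/DHHC-1:02: secrets generated earlier in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Host side: restrict the initiator to one digest/dhgroup combination for this pass.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Target side: authorize the host NQN with key1 (adding ckey1 makes the auth bidirectional).
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach through the SPDK host stack, confirm the qpair finished authentication, detach.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator, then tear the host mapping down again.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6

The same cycle then repeats in the log for key2, key3 and, after the dhgroup switch, for ffdhe8192.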
00:13:06.397 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.397 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.397 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.963 { 00:13:06.963 "cntlid": 135, 00:13:06.963 "qid": 0, 00:13:06.963 "state": "enabled", 00:13:06.963 "thread": "nvmf_tgt_poll_group_000", 00:13:06.963 "listen_address": { 00:13:06.963 "trtype": "TCP", 00:13:06.963 "adrfam": "IPv4", 00:13:06.963 "traddr": "10.0.0.2", 00:13:06.963 "trsvcid": "4420" 00:13:06.963 }, 00:13:06.963 "peer_address": { 00:13:06.963 "trtype": "TCP", 00:13:06.963 "adrfam": "IPv4", 00:13:06.963 "traddr": "10.0.0.1", 00:13:06.963 "trsvcid": "49418" 00:13:06.963 }, 00:13:06.963 "auth": { 00:13:06.963 "state": "completed", 00:13:06.963 "digest": "sha512", 00:13:06.963 "dhgroup": "ffdhe6144" 00:13:06.963 } 00:13:06.963 } 00:13:06.963 ]' 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.963 08:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.963 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.963 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.963 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.221 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:13:07.788 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.788 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:07.788 08:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.788 08:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.046 08:25:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.046 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.046 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.046 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:08.046 08:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.305 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.871 00:13:08.871 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.871 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.871 08:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.128 { 00:13:09.128 "cntlid": 137, 00:13:09.128 "qid": 0, 00:13:09.128 "state": "enabled", 
00:13:09.128 "thread": "nvmf_tgt_poll_group_000", 00:13:09.128 "listen_address": { 00:13:09.128 "trtype": "TCP", 00:13:09.128 "adrfam": "IPv4", 00:13:09.128 "traddr": "10.0.0.2", 00:13:09.128 "trsvcid": "4420" 00:13:09.128 }, 00:13:09.128 "peer_address": { 00:13:09.128 "trtype": "TCP", 00:13:09.128 "adrfam": "IPv4", 00:13:09.128 "traddr": "10.0.0.1", 00:13:09.128 "trsvcid": "49442" 00:13:09.128 }, 00:13:09.128 "auth": { 00:13:09.128 "state": "completed", 00:13:09.128 "digest": "sha512", 00:13:09.128 "dhgroup": "ffdhe8192" 00:13:09.128 } 00:13:09.128 } 00:13:09.128 ]' 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.128 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.692 08:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.259 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:10.517 
08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.517 08:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.083 00:13:11.342 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.342 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.342 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.600 { 00:13:11.600 "cntlid": 139, 00:13:11.600 "qid": 0, 00:13:11.600 "state": "enabled", 00:13:11.600 "thread": "nvmf_tgt_poll_group_000", 00:13:11.600 "listen_address": { 00:13:11.600 "trtype": "TCP", 00:13:11.600 "adrfam": "IPv4", 00:13:11.600 "traddr": "10.0.0.2", 00:13:11.600 "trsvcid": "4420" 00:13:11.600 }, 00:13:11.600 "peer_address": { 00:13:11.600 "trtype": "TCP", 00:13:11.600 "adrfam": "IPv4", 00:13:11.600 "traddr": "10.0.0.1", 00:13:11.600 "trsvcid": "49456" 00:13:11.600 }, 00:13:11.600 "auth": { 00:13:11.600 "state": "completed", 00:13:11.600 "digest": "sha512", 00:13:11.600 "dhgroup": "ffdhe8192" 00:13:11.600 } 00:13:11.600 } 00:13:11.600 ]' 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.600 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.858 08:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:01:ODI5ZGZkYTdjMzE4NGMwN2NiNWNiYmMwZjRhYjI4ZWSfzRk6: --dhchap-ctrl-secret DHHC-1:02:NGY2NjE5NjA4YjA2Y2U2ZmI0YTg4MzgwMmJlMTUxMWE2ODM3YzMyZWI1ZWVjNDc3IVHbOw==: 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.794 08:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.795 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.795 08:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.730 00:13:13.730 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.730 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.730 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.730 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.730 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.730 08:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.731 08:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.731 08:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.731 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.731 { 00:13:13.731 "cntlid": 141, 00:13:13.731 "qid": 0, 00:13:13.731 "state": "enabled", 00:13:13.731 "thread": "nvmf_tgt_poll_group_000", 00:13:13.731 "listen_address": { 00:13:13.731 "trtype": "TCP", 00:13:13.731 "adrfam": "IPv4", 00:13:13.731 "traddr": "10.0.0.2", 00:13:13.731 "trsvcid": "4420" 00:13:13.731 }, 00:13:13.731 "peer_address": { 00:13:13.731 "trtype": "TCP", 00:13:13.731 "adrfam": "IPv4", 00:13:13.731 "traddr": "10.0.0.1", 00:13:13.731 "trsvcid": "49482" 00:13:13.731 }, 00:13:13.731 "auth": { 00:13:13.731 "state": "completed", 00:13:13.731 "digest": "sha512", 00:13:13.731 "dhgroup": "ffdhe8192" 00:13:13.731 } 00:13:13.731 } 00:13:13.731 ]' 00:13:13.731 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.731 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.731 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.989 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.989 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.989 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.989 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.989 08:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.247 08:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:02:M2QzYmVhOWVjN2U2NGIzYjkzNzFhN2IzNjNiNzMxZjFlZDZkMDJiMTBkYmNmMGZlM8IJBw==: --dhchap-ctrl-secret DHHC-1:01:ZDliZWJlYmZmOGQ5MDVmM2E5OGQ2ZmJjZDhlYjA3MGT8TygM: 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.819 08:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.077 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.013 00:13:16.013 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.013 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.013 08:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.013 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.013 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.013 08:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.013 08:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.013 08:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
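The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that recurs in the trace is what makes the key3 passes unidirectional: the run has no controller key for index 3, so the option pair is dropped and only the host authenticates to the target (note the key3 nvme connect lines above carry --dhchap-secret but no --dhchap-ctrl-secret). A standalone rendering of the idiom, with invented array contents (in the trace the values come from the keys generated earlier in the run, and rpc_cmd is the autotest wrapper around scripts/rpc.py):

    # Index 3 intentionally left empty: ${var:+...} expands to nothing when unset or empty.
    ckeys=( "ckey0-secret" "ckey1-secret" "ckey2-secret" "" )
    keyid=3
    ckey=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )   # empty array for key3, two words otherwise
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 \
        --dhchap-key "key$keyid" "${ckey[@]}"                   # ckey args only appear when a ckey exists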
00:13:16.013 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.013 { 00:13:16.013 "cntlid": 143, 00:13:16.013 "qid": 0, 00:13:16.013 "state": "enabled", 00:13:16.013 "thread": "nvmf_tgt_poll_group_000", 00:13:16.013 "listen_address": { 00:13:16.014 "trtype": "TCP", 00:13:16.014 "adrfam": "IPv4", 00:13:16.014 "traddr": "10.0.0.2", 00:13:16.014 "trsvcid": "4420" 00:13:16.014 }, 00:13:16.014 "peer_address": { 00:13:16.014 "trtype": "TCP", 00:13:16.014 "adrfam": "IPv4", 00:13:16.014 "traddr": "10.0.0.1", 00:13:16.014 "trsvcid": "50630" 00:13:16.014 }, 00:13:16.014 "auth": { 00:13:16.014 "state": "completed", 00:13:16.014 "digest": "sha512", 00:13:16.014 "dhgroup": "ffdhe8192" 00:13:16.014 } 00:13:16.014 } 00:13:16.014 ]' 00:13:16.014 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.014 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.014 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.272 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.272 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.272 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.272 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.272 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.530 08:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:13:17.103 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.103 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:17.103 08:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.103 08:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.362 08:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.295 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.295 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.295 { 00:13:18.295 "cntlid": 145, 00:13:18.295 "qid": 0, 00:13:18.295 "state": "enabled", 00:13:18.295 "thread": "nvmf_tgt_poll_group_000", 00:13:18.295 "listen_address": { 00:13:18.295 "trtype": "TCP", 00:13:18.295 "adrfam": "IPv4", 00:13:18.295 "traddr": "10.0.0.2", 00:13:18.295 "trsvcid": "4420" 00:13:18.295 }, 00:13:18.295 "peer_address": { 00:13:18.295 "trtype": "TCP", 00:13:18.295 "adrfam": "IPv4", 00:13:18.295 "traddr": "10.0.0.1", 00:13:18.295 "trsvcid": "50646" 00:13:18.295 }, 00:13:18.295 "auth": { 00:13:18.295 "state": "completed", 00:13:18.295 "digest": "sha512", 00:13:18.295 "dhgroup": "ffdhe8192" 00:13:18.295 } 00:13:18.295 } 
00:13:18.295 ]' 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.553 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.811 08:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:00:YjNhMmEzYTEwZTYyNTM2OTgyMDZhMzA3NTE2NTY1ZjQ2MDhhMmE0MmMzZTEzYmU2Afx2NA==: --dhchap-ctrl-secret DHHC-1:03:MWRiMzU0MGExOWRhZGFlMWQwOWI2MTE1YzEzNzU4MWRkOTZiM2JjNmE5ZWI4NDgyZDQwOTljYjNjMWRlNmZiMw6pYio=: 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.755 08:26:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:19.755 08:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:20.321 request: 00:13:20.321 { 00:13:20.321 "name": "nvme0", 00:13:20.321 "trtype": "tcp", 00:13:20.321 "traddr": "10.0.0.2", 00:13:20.321 "adrfam": "ipv4", 00:13:20.321 "trsvcid": "4420", 00:13:20.321 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:20.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6", 00:13:20.321 "prchk_reftag": false, 00:13:20.321 "prchk_guard": false, 00:13:20.321 "hdgst": false, 00:13:20.321 "ddgst": false, 00:13:20.321 "dhchap_key": "key2", 00:13:20.321 "method": "bdev_nvme_attach_controller", 00:13:20.321 "req_id": 1 00:13:20.321 } 00:13:20.321 Got JSON-RPC error response 00:13:20.321 response: 00:13:20.321 { 00:13:20.321 "code": -5, 00:13:20.321 "message": "Input/output error" 00:13:20.321 } 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:20.321 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:20.322 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.322 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:20.322 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.322 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:20.322 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:20.888 request: 00:13:20.888 { 00:13:20.888 "name": "nvme0", 00:13:20.888 "trtype": "tcp", 00:13:20.888 "traddr": "10.0.0.2", 00:13:20.888 "adrfam": "ipv4", 00:13:20.888 "trsvcid": "4420", 00:13:20.888 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:20.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6", 00:13:20.888 "prchk_reftag": false, 00:13:20.888 "prchk_guard": false, 00:13:20.888 "hdgst": false, 00:13:20.888 "ddgst": false, 00:13:20.888 "dhchap_key": "key1", 00:13:20.888 "dhchap_ctrlr_key": "ckey2", 00:13:20.888 "method": "bdev_nvme_attach_controller", 00:13:20.888 "req_id": 1 00:13:20.888 } 00:13:20.888 Got JSON-RPC error response 00:13:20.888 response: 00:13:20.888 { 00:13:20.888 "code": -5, 00:13:20.888 "message": "Input/output error" 00:13:20.888 } 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key1 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.888 08:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.454 request: 00:13:21.454 { 00:13:21.454 "name": "nvme0", 00:13:21.454 "trtype": "tcp", 00:13:21.454 "traddr": "10.0.0.2", 00:13:21.454 "adrfam": "ipv4", 00:13:21.454 "trsvcid": "4420", 00:13:21.454 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:21.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6", 00:13:21.454 "prchk_reftag": false, 00:13:21.454 "prchk_guard": false, 00:13:21.454 "hdgst": false, 00:13:21.454 "ddgst": false, 00:13:21.454 "dhchap_key": "key1", 00:13:21.454 "dhchap_ctrlr_key": "ckey1", 00:13:21.454 "method": "bdev_nvme_attach_controller", 00:13:21.454 "req_id": 1 00:13:21.454 } 00:13:21.454 Got JSON-RPC error response 00:13:21.454 response: 00:13:21.454 { 00:13:21.454 "code": -5, 00:13:21.454 "message": "Input/output error" 00:13:21.454 } 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69410 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69410 ']' 00:13:21.454 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69410 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69410 00:13:21.712 killing process with pid 69410 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69410' 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69410 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69410 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72438 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72438 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72438 ']' 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.712 08:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
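At this point the first target (pid 69410) is killed and nvmfappstart relaunches it for the second half of the test. A sketch of that restart as it appears in the trace (nvmf_tgt_ns_spdk and waitforlisten are the network namespace and autotest helper used by this run):

    # nvmf/common.sh@480-482 in the trace: fresh target inside the test netns,
    # gated on --wait-for-rpc and with nvmf_auth debug logging enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # poll /var/tmp/spdk.sock until the new app (pid 72438 here) answers
    # With --wait-for-rpc the app stays in its pre-init state until an explicit
    # framework_start_init RPC; the bare rpc_cmd at target/auth.sh@143 below is assumed to
    # deliver that (the trace does not show the RPC body).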
00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72438 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72438 ']' 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.085 08:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.085 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.085 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:23.085 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:23.085 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.085 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:23.344 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:23.912 00:13:23.912 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.912 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.912 08:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.175 { 00:13:24.175 "cntlid": 1, 00:13:24.175 "qid": 0, 00:13:24.175 "state": "enabled", 00:13:24.175 "thread": "nvmf_tgt_poll_group_000", 00:13:24.175 "listen_address": { 00:13:24.175 "trtype": "TCP", 00:13:24.175 "adrfam": "IPv4", 00:13:24.175 "traddr": "10.0.0.2", 00:13:24.175 "trsvcid": "4420" 00:13:24.175 }, 00:13:24.175 "peer_address": { 00:13:24.175 "trtype": "TCP", 00:13:24.175 "adrfam": "IPv4", 00:13:24.175 "traddr": "10.0.0.1", 00:13:24.175 "trsvcid": "50700" 00:13:24.175 }, 00:13:24.175 "auth": { 00:13:24.175 "state": "completed", 00:13:24.175 "digest": "sha512", 00:13:24.175 "dhgroup": "ffdhe8192" 00:13:24.175 } 00:13:24.175 } 00:13:24.175 ]' 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.175 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.433 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.433 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.433 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.433 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.433 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.691 08:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-secret DHHC-1:03:Y2RhYTVlMWY3ODg2NTI5Mjg3ZTU2YjI3NjhlMWJlMjg5NmU5OWRjYzRiMWVlNTBjMDU5YzhmM2Q4YmE2YmFlNKIyA44=: 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --dhchap-key key3 00:13:25.633 08:26:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.633 08:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.898 request: 00:13:25.898 { 00:13:25.898 "name": "nvme0", 00:13:25.898 "trtype": "tcp", 00:13:25.898 "traddr": "10.0.0.2", 00:13:25.898 "adrfam": "ipv4", 00:13:25.898 "trsvcid": "4420", 00:13:25.898 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6", 00:13:25.898 "prchk_reftag": false, 00:13:25.898 "prchk_guard": false, 00:13:25.898 "hdgst": false, 00:13:25.898 "ddgst": false, 00:13:25.898 "dhchap_key": "key3", 00:13:25.898 "method": "bdev_nvme_attach_controller", 00:13:25.898 "req_id": 1 00:13:25.898 } 00:13:25.898 Got JSON-RPC error response 00:13:25.898 response: 00:13:25.898 { 00:13:25.898 "code": -5, 00:13:25.898 "message": "Input/output error" 00:13:25.898 } 00:13:26.165 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:26.165 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
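[editor's note] The Input/output error above is the expected outcome: target/auth.sh@157 narrows the host's allowed DHCHAP digests to sha256 only, so the @158 attach attempt is meant to be rejected (the earlier successful association on this subsystem used sha512/ffdhe8192), and the -5 status is the pass condition enforced by the NOT wrapper. Reduced to direct rpc.py calls against the host socket from this run, the negative check follows this pattern; this is a simplified sketch, not the NOT/valid_exec_arg helpers themselves, with $rpc standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and $hostnqn for the nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-... host NQN seen above:

    # restrict the host side to a digest the target will not accept here
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # the attach is expected to fail; a successful attach is the test failure
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "attach unexpectedly succeeded with mismatched DHCHAP digests" >&2
        exit 1
    fi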
00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.166 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.435 request: 00:13:26.435 { 00:13:26.435 "name": "nvme0", 00:13:26.435 "trtype": "tcp", 00:13:26.435 "traddr": "10.0.0.2", 00:13:26.435 "adrfam": "ipv4", 00:13:26.435 "trsvcid": "4420", 00:13:26.435 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:26.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6", 00:13:26.435 "prchk_reftag": false, 00:13:26.435 "prchk_guard": false, 00:13:26.435 "hdgst": false, 00:13:26.435 "ddgst": false, 00:13:26.435 "dhchap_key": "key3", 00:13:26.435 "method": "bdev_nvme_attach_controller", 00:13:26.435 "req_id": 1 00:13:26.435 } 00:13:26.435 Got JSON-RPC error response 00:13:26.435 response: 00:13:26.435 { 00:13:26.435 "code": -5, 00:13:26.435 "message": "Input/output error" 00:13:26.435 } 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:26.435 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.705 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:26.976 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.976 08:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:26.976 08:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:13:26.976 request: 00:13:26.976 { 00:13:26.976 "name": "nvme0", 00:13:26.976 "trtype": "tcp", 00:13:26.976 "traddr": "10.0.0.2", 00:13:26.976 "adrfam": "ipv4", 00:13:26.976 "trsvcid": "4420", 00:13:26.976 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:26.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6", 00:13:26.976 "prchk_reftag": false, 00:13:26.976 "prchk_guard": false, 00:13:26.976 "hdgst": false, 00:13:26.976 "ddgst": false, 00:13:26.976 "dhchap_key": "key0", 00:13:26.976 "dhchap_ctrlr_key": "key1", 00:13:26.976 "method": "bdev_nvme_attach_controller", 00:13:26.976 "req_id": 1 00:13:26.976 } 00:13:26.976 Got JSON-RPC error response 00:13:26.976 response: 00:13:26.976 { 00:13:26.976 "code": -5, 00:13:26.976 "message": "Input/output error" 00:13:26.976 } 00:13:26.976 08:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:26.976 08:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:26.976 08:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:26.976 08:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:26.976 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:26.976 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:27.544 00:13:27.544 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:27.544 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:27.544 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.802 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.802 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.802 08:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69442 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69442 ']' 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69442 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69442 00:13:28.060 killing process with pid 69442 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.060 08:26:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69442' 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69442 00:13:28.060 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69442 00:13:28.318 08:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:28.318 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.318 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.577 rmmod nvme_tcp 00:13:28.577 rmmod nvme_fabrics 00:13:28.577 rmmod nvme_keyring 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72438 ']' 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72438 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72438 ']' 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72438 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72438 00:13:28.577 killing process with pid 72438 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72438' 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72438 00:13:28.577 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72438 00:13:28.835 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.835 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.835 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UPM /tmp/spdk.key-sha256.rlC /tmp/spdk.key-sha384.rhK /tmp/spdk.key-sha512.ghU /tmp/spdk.key-sha512.uM5 /tmp/spdk.key-sha384.YB3 /tmp/spdk.key-sha256.zvK '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:28.836 ************************************ 00:13:28.836 END TEST nvmf_auth_target 00:13:28.836 ************************************ 00:13:28.836 00:13:28.836 real 2m48.837s 00:13:28.836 user 6m44.853s 00:13:28.836 sys 0m25.997s 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.836 08:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.836 08:26:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.836 08:26:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:28.836 08:26:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:28.836 08:26:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:28.836 08:26:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.836 08:26:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.836 ************************************ 00:13:28.836 START TEST nvmf_bdevio_no_huge 00:13:28.836 ************************************ 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:28.836 * Looking for test storage... 00:13:28.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.836 08:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.836 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.094 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.094 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.094 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.094 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.095 
08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.095 08:26:21 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:29.095 Cannot find device "nvmf_tgt_br" 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.095 Cannot find device "nvmf_tgt_br2" 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:29.095 Cannot find device "nvmf_tgt_br" 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:29.095 Cannot find device "nvmf_tgt_br2" 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:29.095 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:29.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:29.354 00:13:29.354 --- 10.0.0.2 ping statistics --- 00:13:29.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.354 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:29.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:29.354 00:13:29.354 --- 10.0.0.3 ping statistics --- 00:13:29.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.354 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:29.354 00:13:29.354 --- 10.0.0.1 ping statistics --- 00:13:29.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.354 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72758 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72758 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72758 ']' 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.354 08:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.354 [2024-07-15 08:26:21.449805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:29.354 [2024-07-15 08:26:21.449927] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:29.613 [2024-07-15 08:26:21.593051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.872 [2024-07-15 08:26:21.787300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:29.872 [2024-07-15 08:26:21.787360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.872 [2024-07-15 08:26:21.787375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.872 [2024-07-15 08:26:21.787385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.872 [2024-07-15 08:26:21.787395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.872 [2024-07-15 08:26:21.787494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:29.872 [2024-07-15 08:26:21.787770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:29.872 [2024-07-15 08:26:21.789761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:29.872 [2024-07-15 08:26:21.789779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.872 [2024-07-15 08:26:21.795373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 [2024-07-15 08:26:22.519593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 Malloc0 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 [2024-07-15 08:26:22.559753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:30.439 { 00:13:30.439 "params": { 00:13:30.439 "name": "Nvme$subsystem", 00:13:30.439 "trtype": "$TEST_TRANSPORT", 00:13:30.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:30.439 "adrfam": "ipv4", 00:13:30.439 "trsvcid": "$NVMF_PORT", 00:13:30.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:30.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:30.439 "hdgst": ${hdgst:-false}, 00:13:30.439 "ddgst": ${ddgst:-false} 00:13:30.439 }, 00:13:30.439 "method": "bdev_nvme_attach_controller" 00:13:30.439 } 00:13:30.439 EOF 00:13:30.439 )") 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:30.439 08:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:30.439 "params": { 00:13:30.439 "name": "Nvme1", 00:13:30.439 "trtype": "tcp", 00:13:30.439 "traddr": "10.0.0.2", 00:13:30.439 "adrfam": "ipv4", 00:13:30.439 "trsvcid": "4420", 00:13:30.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.439 "hdgst": false, 00:13:30.439 "ddgst": false 00:13:30.439 }, 00:13:30.439 "method": "bdev_nvme_attach_controller" 00:13:30.439 }' 00:13:30.697 [2024-07-15 08:26:22.617354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
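[editor's note] Before bdevio starts, target/bdevio.sh@18-22 has already built the whole target-side fixture through rpc_cmd: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on 10.0.0.2:4420. Collapsed into explicit rpc.py invocations the sequence is roughly the following; the flags are the ones traced above, while the /var/tmp/spdk.sock path is only the customary default for rpc_cmd and is an assumption here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock   # assumed default target RPC socket
    # same transport options as the rpc_cmd call traced above
    "$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    "$rpc" -s "$sock" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420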
00:13:30.697 [2024-07-15 08:26:22.617477] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72794 ] 00:13:30.697 [2024-07-15 08:26:22.764125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.955 [2024-07-15 08:26:22.915751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.955 [2024-07-15 08:26:22.917767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.955 [2024-07-15 08:26:22.917790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.955 [2024-07-15 08:26:22.931701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.955 I/O targets: 00:13:30.955 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:30.955 00:13:30.955 00:13:30.955 CUnit - A unit testing framework for C - Version 2.1-3 00:13:30.955 http://cunit.sourceforge.net/ 00:13:30.955 00:13:30.955 00:13:30.955 Suite: bdevio tests on: Nvme1n1 00:13:30.955 Test: blockdev write read block ...passed 00:13:30.955 Test: blockdev write zeroes read block ...passed 00:13:30.955 Test: blockdev write zeroes read no split ...passed 00:13:30.955 Test: blockdev write zeroes read split ...passed 00:13:31.213 Test: blockdev write zeroes read split partial ...passed 00:13:31.213 Test: blockdev reset ...[2024-07-15 08:26:23.139275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:31.213 [2024-07-15 08:26:23.139402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1334870 (9): Bad file descriptor 00:13:31.213 passed 00:13:31.213 Test: blockdev write read 8 blocks ...[2024-07-15 08:26:23.151090] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:31.213 passed 00:13:31.213 Test: blockdev write read size > 128k ...passed 00:13:31.213 Test: blockdev write read invalid size ...passed 00:13:31.213 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.213 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.213 Test: blockdev write read max offset ...passed 00:13:31.213 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.213 Test: blockdev writev readv 8 blocks ...passed 00:13:31.213 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.213 Test: blockdev writev readv block ...passed 00:13:31.213 Test: blockdev writev readv size > 128k ...passed 00:13:31.213 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.213 Test: blockdev comparev and writev ...[2024-07-15 08:26:23.158874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.213 [2024-07-15 08:26:23.158911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:31.213 [2024-07-15 08:26:23.158938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.213 [2024-07-15 08:26:23.158950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:31.213 passed 00:13:31.213 Test: blockdev nvme passthru rw ...[2024-07-15 08:26:23.159303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.213 [2024-07-15 08:26:23.159325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.159341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.214 [2024-07-15 08:26:23.159351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.159620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.214 [2024-07-15 08:26:23.159637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.159654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.214 [2024-07-15 08:26:23.159664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.159934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.214 [2024-07-15 08:26:23.159951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.159967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.214 [2024-07-15 08:26:23.159977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:31.214 passed 00:13:31.214 Test: blockdev nvme passthru vendor specific ...[2024-07-15 08:26:23.160654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.214 [2024-07-15 08:26:23.160677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.160819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.214 [2024-07-15 08:26:23.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.160941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.214 [2024-07-15 08:26:23.160956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:31.214 [2024-07-15 08:26:23.161059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.214 [2024-07-15 08:26:23.161075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:31.214 passed 00:13:31.214 Test: blockdev nvme admin passthru ...passed 00:13:31.214 Test: blockdev copy ...passed 00:13:31.214 00:13:31.214 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.214 suites 1 1 n/a 0 0 00:13:31.214 tests 23 23 23 0 0 00:13:31.214 asserts 152 152 152 0 n/a 00:13:31.214 00:13:31.214 Elapsed time = 0.179 seconds 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.472 rmmod nvme_tcp 00:13:31.472 rmmod nvme_fabrics 00:13:31.472 rmmod nvme_keyring 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72758 ']' 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 72758 00:13:31.472 
08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72758 ']' 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72758 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.472 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72758 00:13:31.730 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:31.730 killing process with pid 72758 00:13:31.730 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:31.730 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72758' 00:13:31.730 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72758 00:13:31.730 08:26:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72758 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:31.989 00:13:31.989 real 0m3.231s 00:13:31.989 user 0m10.365s 00:13:31.989 sys 0m1.361s 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.989 08:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.989 ************************************ 00:13:31.989 END TEST nvmf_bdevio_no_huge 00:13:31.989 ************************************ 00:13:32.247 08:26:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:32.247 08:26:24 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:32.247 08:26:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:32.247 08:26:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.247 08:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.247 ************************************ 00:13:32.247 START TEST nvmf_tls 00:13:32.247 ************************************ 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:32.247 * Looking for test storage... 
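
Before the tls.sh run gets going below, it is worth noting how the bdevio test above tears its target down. Condensed from the xtrace (helper structure simplified; the real helpers live in the repo's autotest_common.sh and nvmf/common.sh), the pattern is roughly:

    # Sketch of the killprocess/nvmftestfini pattern traced above (simplified).
    killprocess() {
        local pid=$1
        # Signal only if the pid is still alive and is not sudo itself
        # (the real helper inspects `ps --no-headers -o comm=` as in the trace).
        if kill -0 "$pid" && [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true
        fi
    }

    nvmftestfini_sketch() {
        killprocess "$nvmfpid"      # pid 72758 in this run
        modprobe -v -r nvme-tcp     # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
        modprobe -v -r nvme-fabrics
        # _remove_spdk_ns (output hidden behind the 14> redirect in the trace) drops the
        # nvmf_tgt_ns_spdk namespace, then the initiator interface address is flushed:
        ip -4 addr flush nvmf_init_if
    }
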
00:13:32.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.247 08:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:32.248 Cannot find device "nvmf_tgt_br" 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.248 Cannot find device "nvmf_tgt_br2" 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:32.248 Cannot find device "nvmf_tgt_br" 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:32.248 Cannot find device "nvmf_tgt_br2" 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:32.248 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:32.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:13:32.507 00:13:32.507 --- 10.0.0.2 ping statistics --- 00:13:32.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.507 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:32.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:32.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:32.507 00:13:32.507 --- 10.0.0.3 ping statistics --- 00:13:32.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.507 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:32.507 00:13:32.507 --- 10.0.0.1 ping statistics --- 00:13:32.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.507 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72981 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72981 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72981 ']' 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.507 08:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.766 [2024-07-15 08:26:24.708004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:32.766 [2024-07-15 08:26:24.708150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.766 [2024-07-15 08:26:24.848215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.025 [2024-07-15 08:26:24.967058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.025 [2024-07-15 08:26:24.967168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:33.025 [2024-07-15 08:26:24.967191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.025 [2024-07-15 08:26:24.967201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.025 [2024-07-15 08:26:24.967210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.025 [2024-07-15 08:26:24.967252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:33.961 08:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:33.962 true 00:13:34.219 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.219 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:34.477 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:34.477 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:34.477 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:34.734 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.734 08:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:34.993 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:34.993 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:34.993 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:35.251 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:35.251 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:35.509 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:35.509 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:35.509 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:35.510 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:35.769 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:35.769 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:35.769 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:36.027 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:36.027 08:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
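
What the jq checks in this stretch of the trace are doing is exercising the ssl socket implementation's options over JSON-RPC before the target is fully initialized. Condensed into plain commands (rpc.py path and option names exactly as traced; the assertion style here is simplified):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Make ssl the default socket implementation and pin TLS 1.3.
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]

    # The script also verifies that kTLS can be toggled and read back.
    $rpc sock_impl_set_options -i ssl --enable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
    $rpc sock_impl_set_options -i ssl --disable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]
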
00:13:36.027 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:36.027 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:36.027 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:36.286 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:36.286 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.uYzh5S3Jyz 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.pG0ejOWI81 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uYzh5S3Jyz 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pG0ejOWI81 00:13:36.850 08:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:37.108 08:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:37.365 [2024-07-15 08:26:29.357182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:37.365 08:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uYzh5S3Jyz 00:13:37.365 08:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uYzh5S3Jyz 00:13:37.365 08:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:37.623 [2024-07-15 08:26:29.676765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.623 08:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:37.881 08:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:38.140 [2024-07-15 08:26:30.156891] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.140 [2024-07-15 08:26:30.157208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.140 08:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:38.398 malloc0 00:13:38.398 08:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:38.657 08:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uYzh5S3Jyz 00:13:38.915 [2024-07-15 08:26:30.885923] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:38.915 08:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uYzh5S3Jyz 00:13:51.189 Initializing NVMe Controllers 00:13:51.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.189 Initialization complete. Launching workers. 
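
For reference, the target bring-up traced above (whose spdk_nvme_perf results appear just below) reduces to the short RPC sequence sketched here. Addresses, NQNs and the /tmp key path are the ones captured in this run; the key file holds the NVMe TLS PSK interchange string ("NVMeTLSkey-1:01:" plus base64 of the configured key material and a CRC-32 check value) produced by the inline python helper traced a little earlier.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key_path=/tmp/tmp.uYzh5S3Jyz          # NVMeTLSkey-1:01:MDAxMTIy...JEiQ: in this run
    chmod 0600 "$key_path"                # the script chmods the key files to 0600

    # The target was started with --wait-for-rpc, so finish its init first.
    $rpc framework_start_init

    # TCP transport, one subsystem with a malloc namespace, a TLS listener (-k),
    # and a host entry bound to the PSK file.
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"

    # Initiator side: perf over the ssl socket implementation with the same PSK.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$key_path"
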
00:13:51.189 ======================================================== 00:13:51.189 Latency(us) 00:13:51.189 Device Information : IOPS MiB/s Average min max 00:13:51.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9481.99 37.04 6751.21 1682.90 11173.45 00:13:51.189 ======================================================== 00:13:51.189 Total : 9481.99 37.04 6751.21 1682.90 11173.45 00:13:51.189 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uYzh5S3Jyz 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uYzh5S3Jyz' 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73217 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73217 /var/tmp/bdevperf.sock 00:13:51.189 08:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73217 ']' 00:13:51.190 08:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.190 08:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.190 08:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.190 08:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.190 08:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.190 [2024-07-15 08:26:41.177383] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:51.190 [2024-07-15 08:26:41.177464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73217 ] 00:13:51.190 [2024-07-15 08:26:41.316055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.190 [2024-07-15 08:26:41.445781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.190 [2024-07-15 08:26:41.502997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.190 08:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.190 08:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:51.190 08:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uYzh5S3Jyz 00:13:51.190 [2024-07-15 08:26:42.438913] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.190 [2024-07-15 08:26:42.439048] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.190 TLSTESTn1 00:13:51.190 08:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:51.190 Running I/O for 10 seconds... 00:14:01.183 00:14:01.183 Latency(us) 00:14:01.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.183 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:01.183 Verification LBA range: start 0x0 length 0x2000 00:14:01.183 TLSTESTn1 : 10.02 3740.01 14.61 0.00 0.00 34158.53 7864.32 45041.11 00:14:01.183 =================================================================================================================== 00:14:01.183 Total : 3740.01 14.61 0.00 0.00 34158.53 7864.32 45041.11 00:14:01.183 0 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73217 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73217 ']' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73217 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73217 00:14:01.183 killing process with pid 73217 00:14:01.183 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.183 00:14:01.183 Latency(us) 00:14:01.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.183 =================================================================================================================== 00:14:01.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73217' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73217 00:14:01.183 [2024-07-15 08:26:52.693498] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73217 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pG0ejOWI81 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pG0ejOWI81 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pG0ejOWI81 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pG0ejOWI81' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73353 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73353 /var/tmp/bdevperf.sock 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73353 ']' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.183 08:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.183 [2024-07-15 08:26:52.985176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:01.184 [2024-07-15 08:26:52.985290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73353 ] 00:14:01.184 [2024-07-15 08:26:53.119014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.184 [2024-07-15 08:26:53.236798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.184 [2024-07-15 08:26:53.290926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.118 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.118 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:02.118 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pG0ejOWI81 00:14:02.377 [2024-07-15 08:26:54.320645] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.377 [2024-07-15 08:26:54.320797] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:02.377 [2024-07-15 08:26:54.331999] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:02.377 [2024-07-15 08:26:54.332394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ff1f0 (107): Transport endpoint is not connected 00:14:02.377 [2024-07-15 08:26:54.333383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ff1f0 (9): Bad file descriptor 00:14:02.377 [2024-07-15 08:26:54.334379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:02.377 [2024-07-15 08:26:54.334409] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:02.377 [2024-07-15 08:26:54.334426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
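
The error above is the expected outcome of this step: /tmp/tmp.pG0ejOWI81 holds the second key, which was never registered for host1 on the target, so the TLS handshake breaks down and the attach has to fail. Reduced to its core (paths, NQNs and bdevperf flags as captured; the waitforlisten/cleanup plumbing is omitted), the negative check is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bp_sock=/var/tmp/bdevperf.sock

    # Start bdevperf in wait mode (-z) with its own RPC socket, as the trace does;
    # the real script waits for $bp_sock via waitforlisten before issuing RPCs.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$bp_sock" -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Attaching with a PSK the target never registered for host1 must error out,
    # which surfaces as the JSON-RPC Input/output error dumped just below.
    if $rpc -s "$bp_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            --psk /tmp/tmp.pG0ejOWI81; then
        echo "unexpected: attach with an unregistered PSK succeeded" >&2
        exit 1
    fi
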
00:14:02.377 request: 00:14:02.377 { 00:14:02.377 "name": "TLSTEST", 00:14:02.377 "trtype": "tcp", 00:14:02.377 "traddr": "10.0.0.2", 00:14:02.377 "adrfam": "ipv4", 00:14:02.377 "trsvcid": "4420", 00:14:02.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.377 "prchk_reftag": false, 00:14:02.377 "prchk_guard": false, 00:14:02.377 "hdgst": false, 00:14:02.377 "ddgst": false, 00:14:02.377 "psk": "/tmp/tmp.pG0ejOWI81", 00:14:02.377 "method": "bdev_nvme_attach_controller", 00:14:02.377 "req_id": 1 00:14:02.377 } 00:14:02.377 Got JSON-RPC error response 00:14:02.377 response: 00:14:02.377 { 00:14:02.377 "code": -5, 00:14:02.377 "message": "Input/output error" 00:14:02.377 } 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73353 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73353 ']' 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73353 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73353 00:14:02.377 killing process with pid 73353 00:14:02.377 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.377 00:14:02.377 Latency(us) 00:14:02.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.377 =================================================================================================================== 00:14:02.377 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73353' 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73353 00:14:02.377 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73353 00:14:02.377 [2024-07-15 08:26:54.381293] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uYzh5S3Jyz 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uYzh5S3Jyz 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uYzh5S3Jyz 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uYzh5S3Jyz' 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73375 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73375 /var/tmp/bdevperf.sock 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73375 ']' 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.636 08:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.636 [2024-07-15 08:26:54.657485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:02.636 [2024-07-15 08:26:54.658304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73375 ] 00:14:02.636 [2024-07-15 08:26:54.799403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.895 [2024-07-15 08:26:54.934129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.895 [2024-07-15 08:26:54.991751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uYzh5S3Jyz 00:14:03.829 [2024-07-15 08:26:55.914532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.829 [2024-07-15 08:26:55.914661] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:03.829 [2024-07-15 08:26:55.922125] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:03.829 [2024-07-15 08:26:55.922169] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:03.829 [2024-07-15 08:26:55.922223] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:03.829 [2024-07-15 08:26:55.923208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd281f0 (107): Transport endpoint is not connected 00:14:03.829 [2024-07-15 08:26:55.924198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd281f0 (9): Bad file descriptor 00:14:03.829 [2024-07-15 08:26:55.925194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:03.829 [2024-07-15 08:26:55.925218] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:03.829 [2024-07-15 08:26:55.925233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
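
This second negative case fails for a different reason than the previous one: the key in /tmp/tmp.uYzh5S3Jyz is valid, but it was registered only for host1, so when the connection presents hostnqn host2 the target cannot resolve a PSK for the identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and the handshake is rejected, as the tcp.c/posix.c errors above show. In outline (NQNs and paths as captured):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # On the target, the PSK is bound to a (subsystem, host) pair:
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uYzh5S3Jyz

    # ...so presenting the same key as a different hostnqn is expected to fail.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
            --psk /tmp/tmp.uYzh5S3Jyz \
        && { echo "unexpected success with mismatched hostnqn" >&2; exit 1; }
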
00:14:03.829 request: 00:14:03.829 { 00:14:03.829 "name": "TLSTEST", 00:14:03.829 "trtype": "tcp", 00:14:03.829 "traddr": "10.0.0.2", 00:14:03.829 "adrfam": "ipv4", 00:14:03.829 "trsvcid": "4420", 00:14:03.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.829 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:03.829 "prchk_reftag": false, 00:14:03.829 "prchk_guard": false, 00:14:03.829 "hdgst": false, 00:14:03.829 "ddgst": false, 00:14:03.829 "psk": "/tmp/tmp.uYzh5S3Jyz", 00:14:03.829 "method": "bdev_nvme_attach_controller", 00:14:03.829 "req_id": 1 00:14:03.829 } 00:14:03.829 Got JSON-RPC error response 00:14:03.829 response: 00:14:03.829 { 00:14:03.829 "code": -5, 00:14:03.829 "message": "Input/output error" 00:14:03.829 } 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73375 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73375 ']' 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73375 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73375 00:14:03.829 killing process with pid 73375 00:14:03.829 Received shutdown signal, test time was about 10.000000 seconds 00:14:03.829 00:14:03.829 Latency(us) 00:14:03.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.829 =================================================================================================================== 00:14:03.829 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73375' 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73375 00:14:03.829 [2024-07-15 08:26:55.968121] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:03.829 08:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73375 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uYzh5S3Jyz 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uYzh5S3Jyz 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:04.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uYzh5S3Jyz 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uYzh5S3Jyz' 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73403 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73403 /var/tmp/bdevperf.sock 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73403 ']' 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.087 08:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.087 [2024-07-15 08:26:56.250814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:04.087 [2024-07-15 08:26:56.250937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73403 ] 00:14:04.345 [2024-07-15 08:26:56.391913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.603 [2024-07-15 08:26:56.524255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.603 [2024-07-15 08:26:56.581599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:05.537 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.537 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:05.537 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uYzh5S3Jyz 00:14:05.537 [2024-07-15 08:26:57.621085] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.537 [2024-07-15 08:26:57.621229] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:05.538 [2024-07-15 08:26:57.632615] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:05.538 [2024-07-15 08:26:57.632667] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:05.538 [2024-07-15 08:26:57.632740] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:05.538 [2024-07-15 08:26:57.632742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b71f0 (107): Transport endpoint is not connected 00:14:05.538 [2024-07-15 08:26:57.633743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b71f0 (9): Bad file descriptor 00:14:05.538 [2024-07-15 08:26:57.634728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:05.538 [2024-07-15 08:26:57.634750] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:05.538 [2024-07-15 08:26:57.634765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
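
The recurring "valid_exec_arg", "type -t run_bdevperf", "es=1" and "(( !es == 0 ))" fragments threaded through these failing attaches come from the harness's NOT helper, which inverts a command's exit status so that an expected failure counts as a pass. A stripped-down version of that pattern, as suggested by the trace (the real helper in autotest_common.sh also validates its argument and special-cases exits above 128, i.e. deaths by signal), looks like:

    # Run a command that is expected to fail; return success only if it does fail.
    NOT() {
        local es=0
        "$@" || es=$?
        # (( !es == 0 )) is arithmetically true exactly when es != 0,
        # i.e. when the wrapped command failed as expected.
        (( !es == 0 ))
    }

    # Usage, mirroring target/tls.sh:
    #   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uYzh5S3Jyz
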
00:14:05.538 request: 00:14:05.538 { 00:14:05.538 "name": "TLSTEST", 00:14:05.538 "trtype": "tcp", 00:14:05.538 "traddr": "10.0.0.2", 00:14:05.538 "adrfam": "ipv4", 00:14:05.538 "trsvcid": "4420", 00:14:05.538 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:05.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.538 "prchk_reftag": false, 00:14:05.538 "prchk_guard": false, 00:14:05.538 "hdgst": false, 00:14:05.538 "ddgst": false, 00:14:05.538 "psk": "/tmp/tmp.uYzh5S3Jyz", 00:14:05.538 "method": "bdev_nvme_attach_controller", 00:14:05.538 "req_id": 1 00:14:05.538 } 00:14:05.538 Got JSON-RPC error response 00:14:05.538 response: 00:14:05.538 { 00:14:05.538 "code": -5, 00:14:05.538 "message": "Input/output error" 00:14:05.538 } 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73403 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73403 ']' 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73403 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73403 00:14:05.538 killing process with pid 73403 00:14:05.538 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.538 00:14:05.538 Latency(us) 00:14:05.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.538 =================================================================================================================== 00:14:05.538 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73403' 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73403 00:14:05.538 [2024-07-15 08:26:57.683711] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:05.538 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73403 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:14:05.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73436 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73436 /var/tmp/bdevperf.sock 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73436 ']' 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.795 08:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.795 [2024-07-15 08:26:57.959787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:05.795 [2024-07-15 08:26:57.959886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73436 ] 00:14:06.053 [2024-07-15 08:26:58.099275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.053 [2024-07-15 08:26:58.214289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.311 [2024-07-15 08:26:58.268465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.876 08:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.876 08:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:06.876 08:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:07.133 [2024-07-15 08:26:59.202914] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:07.133 [2024-07-15 08:26:59.204868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fcc00 (9): Bad file descriptor 00:14:07.133 [2024-07-15 08:26:59.205864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:07.133 [2024-07-15 08:26:59.205889] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:07.133 [2024-07-15 08:26:59.205903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:07.133 request: 00:14:07.133 { 00:14:07.133 "name": "TLSTEST", 00:14:07.133 "trtype": "tcp", 00:14:07.133 "traddr": "10.0.0.2", 00:14:07.133 "adrfam": "ipv4", 00:14:07.133 "trsvcid": "4420", 00:14:07.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.133 "prchk_reftag": false, 00:14:07.133 "prchk_guard": false, 00:14:07.133 "hdgst": false, 00:14:07.133 "ddgst": false, 00:14:07.133 "method": "bdev_nvme_attach_controller", 00:14:07.133 "req_id": 1 00:14:07.133 } 00:14:07.133 Got JSON-RPC error response 00:14:07.133 response: 00:14:07.133 { 00:14:07.133 "code": -5, 00:14:07.133 "message": "Input/output error" 00:14:07.133 } 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73436 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73436 ']' 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73436 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73436 00:14:07.133 killing process with pid 73436 00:14:07.133 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.133 00:14:07.133 Latency(us) 00:14:07.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.133 =================================================================================================================== 00:14:07.133 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73436' 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73436 00:14:07.133 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73436 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72981 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72981 ']' 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72981 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72981 00:14:07.391 killing process with pid 72981 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72981' 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72981 00:14:07.391 [2024-07-15 08:26:59.491511] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:07.391 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72981 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.nNGlligVhF 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.nNGlligVhF 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73472 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73472 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73472 ']' 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.649 08:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.906 [2024-07-15 08:26:59.833290] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:07.906 [2024-07-15 08:26:59.833419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.906 [2024-07-15 08:26:59.976233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.163 [2024-07-15 08:27:00.099944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.163 [2024-07-15 08:27:00.100009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.163 [2024-07-15 08:27:00.100023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.163 [2024-07-15 08:27:00.100034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.163 [2024-07-15 08:27:00.100043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.163 [2024-07-15 08:27:00.100080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.163 [2024-07-15 08:27:00.156255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.nNGlligVhF 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nNGlligVhF 00:14:08.728 08:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:09.292 [2024-07-15 08:27:01.160512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.292 08:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:09.292 08:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:09.549 [2024-07-15 08:27:01.684596] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:09.549 [2024-07-15 08:27:01.684922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.549 08:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:09.807 malloc0 00:14:09.807 08:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:10.107 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:10.365 
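The key used for the rest of the test is the interchange-format PSK derived just above: format_interchange_psk feeds the raw key 00112233445566778899aabbccddeeff0011223344556677 and a hash identifier of 2 through an inline python helper and gets back NVMeTLSkey-1:02:<base64>:, which is written to /tmp/tmp.nNGlligVhF and locked down to 0600. The traced output is consistent with the base64 payload being the key bytes followed by a 4-byte CRC32 trailer; the standalone sketch below reproduces the string under that assumption (the little-endian packing of the CRC is an assumption, not something the trace itself shows):

    # Sketch of what the traced format_key helper appears to compute.
    key="00112233445566778899aabbccddeeff0011223344556677"   # raw key, digest id 2
    b64=$(python3 -c 'import sys, base64, struct, zlib; k = sys.argv[1].encode(); print(base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())' "$key")
    printf 'NVMeTLSkey-1:02:%s:\n' "$b64"
    # If the CRC32 assumption holds, this prints the same NVMeTLSkey-1:02:... string
    # that the trace stores in key_long and echoes into /tmp/tmp.nNGlligVhF.

The PSK-path deprecation warning that follows is emitted by the nvmf_subsystem_add_host call at the end of the block above.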
[2024-07-15 08:27:02.481005] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nNGlligVhF 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nNGlligVhF' 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73532 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73532 /var/tmp/bdevperf.sock 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73532 ']' 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.365 08:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.623 [2024-07-15 08:27:02.553457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:10.623 [2024-07-15 08:27:02.553567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73532 ] 00:14:10.623 [2024-07-15 08:27:02.688052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.881 [2024-07-15 08:27:02.815684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.881 [2024-07-15 08:27:02.875073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.448 08:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.448 08:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:11.448 08:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:11.706 [2024-07-15 08:27:03.796137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.706 [2024-07-15 08:27:03.796279] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:11.706 TLSTESTn1 00:14:11.964 08:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:11.964 Running I/O for 10 seconds... 00:14:22.008 00:14:22.008 Latency(us) 00:14:22.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.008 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:22.008 Verification LBA range: start 0x0 length 0x2000 00:14:22.008 TLSTESTn1 : 10.03 3624.87 14.16 0.00 0.00 35236.68 10009.13 34078.72 00:14:22.009 =================================================================================================================== 00:14:22.009 Total : 3624.87 14.16 0.00 0.00 35236.68 10009.13 34078.72 00:14:22.009 0 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73532 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73532 ']' 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73532 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73532 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:22.009 killing process with pid 73532 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73532' 00:14:22.009 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.009 00:14:22.009 Latency(us) 00:14:22.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.009 
=================================================================================================================== 00:14:22.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73532 00:14:22.009 [2024-07-15 08:27:14.045585] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.009 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73532 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.nNGlligVhF 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nNGlligVhF 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nNGlligVhF 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nNGlligVhF 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nNGlligVhF' 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73667 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73667 /var/tmp/bdevperf.sock 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73667 ']' 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.266 08:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.266 [2024-07-15 08:27:14.332323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:22.266 [2024-07-15 08:27:14.332457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73667 ] 00:14:22.524 [2024-07-15 08:27:14.473835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.524 [2024-07-15 08:27:14.579640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.524 [2024-07-15 08:27:14.631518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:23.456 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.456 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:23.456 08:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:23.456 [2024-07-15 08:27:15.507623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.456 [2024-07-15 08:27:15.507712] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:23.456 [2024-07-15 08:27:15.507736] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.nNGlligVhF 00:14:23.456 request: 00:14:23.456 { 00:14:23.456 "name": "TLSTEST", 00:14:23.456 "trtype": "tcp", 00:14:23.456 "traddr": "10.0.0.2", 00:14:23.456 "adrfam": "ipv4", 00:14:23.456 "trsvcid": "4420", 00:14:23.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.456 "prchk_reftag": false, 00:14:23.456 "prchk_guard": false, 00:14:23.456 "hdgst": false, 00:14:23.456 "ddgst": false, 00:14:23.456 "psk": "/tmp/tmp.nNGlligVhF", 00:14:23.456 "method": "bdev_nvme_attach_controller", 00:14:23.456 "req_id": 1 00:14:23.456 } 00:14:23.456 Got JSON-RPC error response 00:14:23.456 response: 00:14:23.456 { 00:14:23.456 "code": -1, 00:14:23.456 "message": "Operation not permitted" 00:14:23.456 } 00:14:23.456 08:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73667 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73667 ']' 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73667 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73667 00:14:23.457 killing process with pid 73667 00:14:23.457 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.457 00:14:23.457 Latency(us) 00:14:23.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.457 =================================================================================================================== 00:14:23.457 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73667' 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73667 00:14:23.457 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73667 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73472 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73472 ']' 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73472 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73472 00:14:23.714 killing process with pid 73472 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73472' 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73472 00:14:23.714 [2024-07-15 08:27:15.792102] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:23.714 08:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73472 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73698 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73698 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73698 ']' 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.971 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 [2024-07-15 08:27:16.101442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:23.972 [2024-07-15 08:27:16.101580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.230 [2024-07-15 08:27:16.245622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.230 [2024-07-15 08:27:16.361037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.230 [2024-07-15 08:27:16.361102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.230 [2024-07-15 08:27:16.361115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.230 [2024-07-15 08:27:16.361124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.230 [2024-07-15 08:27:16.361132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.230 [2024-07-15 08:27:16.361158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.487 [2024-07-15 08:27:16.414496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:25.054 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.054 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:25.054 08:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.054 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.054 08:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.nNGlligVhF 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nNGlligVhF 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.nNGlligVhF 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nNGlligVhF 00:14:25.054 08:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:25.312 [2024-07-15 08:27:17.250188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.312 08:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:25.571 08:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:25.829 [2024-07-15 08:27:17.754297] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:25.829 [2024-07-15 08:27:17.754532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.829 08:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:26.087 malloc0 00:14:26.087 08:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:26.361 08:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:26.636 [2024-07-15 08:27:18.533872] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:26.636 [2024-07-15 08:27:18.533932] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:26.636 [2024-07-15 08:27:18.533968] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:26.636 request: 00:14:26.636 { 00:14:26.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.636 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.636 "psk": "/tmp/tmp.nNGlligVhF", 00:14:26.636 "method": "nvmf_subsystem_add_host", 00:14:26.636 "req_id": 1 00:14:26.636 } 00:14:26.636 Got JSON-RPC error response 00:14:26.636 response: 00:14:26.636 { 00:14:26.636 "code": -32603, 00:14:26.636 "message": "Internal error" 00:14:26.636 } 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73698 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73698 ']' 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73698 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73698 00:14:26.636 killing process with pid 73698 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73698' 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73698 00:14:26.636 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73698 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.nNGlligVhF 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73762 00:14:26.896 
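The exchange above is the permission check on the key file: with /tmp/tmp.nNGlligVhF deliberately loosened to 0666, the target's tcp_load_psk refuses to read it and nvmf_subsystem_add_host fails with -32603 "Internal error", just as the earlier initiator-side attach with the same file failed with -1 "Operation not permitted". Only after the mode goes back to 0600 does the flow succeed, which is what the restarted target (pid 73762, whose startup continues below) is for. In short, under the file name used by this run:

    chmod 0666 /tmp/tmp.nNGlligVhF   # rejected by both sides ("Incorrect permissions for PSK file")
    chmod 0600 /tmp/tmp.nNGlligVhF   # owner-only mode is what the successful runs use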
08:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73762 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73762 ']' 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.896 08:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.896 [2024-07-15 08:27:18.881559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:26.896 [2024-07-15 08:27:18.881649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.896 [2024-07-15 08:27:19.017179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.154 [2024-07-15 08:27:19.137166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.154 [2024-07-15 08:27:19.137231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.154 [2024-07-15 08:27:19.137245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.154 [2024-07-15 08:27:19.137253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.154 [2024-07-15 08:27:19.137261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
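nvmfappstart here relaunches the NVMe-oF target inside the test's network namespace with a broad tracepoint group mask (-e 0xFFFF) and core mask 0x2; the notices above are the stock reminder about how to capture those tracepoints afterwards. Both commands below are taken verbatim from the trace and from the notice text:

    # Launch as traced by nvmfappstart:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # Capture a snapshot of the enabled tracepoints later, as the notice suggests:
    spdk_trace -s nvmf -i 0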
00:14:27.154 [2024-07-15 08:27:19.137287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.154 [2024-07-15 08:27:19.192309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:27.720 08:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.720 08:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:27.720 08:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.720 08:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:27.720 08:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.978 08:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.978 08:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.nNGlligVhF 00:14:27.978 08:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nNGlligVhF 00:14:27.978 08:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:28.236 [2024-07-15 08:27:20.164602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.236 08:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:28.493 08:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:28.750 [2024-07-15 08:27:20.696707] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.750 [2024-07-15 08:27:20.696953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.750 08:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:29.007 malloc0 00:14:29.007 08:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:29.265 08:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:29.522 [2024-07-15 08:27:21.504280] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:29.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
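With the key back at 0600 the target is reprovisioned exactly as traced above (TCP transport, subsystem cnode1 with a TLS-required listener via -k, a malloc0 namespace, and host1 bound to the PSK file), and the bdevperf instance that starts next attaches over TLS the same way the earlier successful run did before driving I/O through bdevperf.py. Stripped of the trace markers, the sequence looks like this, with all arguments copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: transport, subsystem, TLS listener, namespace, and host+PSK.
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF

    # Initiator side: bdevperf in RPC-driven mode (-z), TLS attach, then the workload.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the harness waits for /var/tmp/bdevperf.sock via waitforlisten before issuing RPCs)
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests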
00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73817 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73817 /var/tmp/bdevperf.sock 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73817 ']' 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.522 08:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.522 [2024-07-15 08:27:21.570317] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:29.522 [2024-07-15 08:27:21.571442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ] 00:14:29.780 [2024-07-15 08:27:21.712205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.780 [2024-07-15 08:27:21.839650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.780 [2024-07-15 08:27:21.895484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.710 08:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.710 08:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:30.710 08:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:30.710 [2024-07-15 08:27:22.783222] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:30.710 [2024-07-15 08:27:22.783355] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:30.710 TLSTESTn1 00:14:30.710 08:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:31.283 08:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:31.283 "subsystems": [ 00:14:31.283 { 00:14:31.283 "subsystem": "keyring", 00:14:31.283 "config": [] 00:14:31.283 }, 00:14:31.283 { 00:14:31.283 "subsystem": "iobuf", 00:14:31.283 "config": [ 00:14:31.283 { 00:14:31.283 "method": "iobuf_set_options", 00:14:31.283 "params": { 00:14:31.283 "small_pool_count": 8192, 00:14:31.283 "large_pool_count": 1024, 00:14:31.283 "small_bufsize": 8192, 00:14:31.283 "large_bufsize": 135168 00:14:31.283 } 00:14:31.283 } 00:14:31.283 ] 00:14:31.283 }, 00:14:31.283 { 00:14:31.283 "subsystem": "sock", 00:14:31.283 "config": [ 00:14:31.283 { 00:14:31.283 
"method": "sock_set_default_impl", 00:14:31.283 "params": { 00:14:31.283 "impl_name": "uring" 00:14:31.283 } 00:14:31.283 }, 00:14:31.283 { 00:14:31.283 "method": "sock_impl_set_options", 00:14:31.283 "params": { 00:14:31.283 "impl_name": "ssl", 00:14:31.283 "recv_buf_size": 4096, 00:14:31.283 "send_buf_size": 4096, 00:14:31.283 "enable_recv_pipe": true, 00:14:31.283 "enable_quickack": false, 00:14:31.283 "enable_placement_id": 0, 00:14:31.283 "enable_zerocopy_send_server": true, 00:14:31.283 "enable_zerocopy_send_client": false, 00:14:31.283 "zerocopy_threshold": 0, 00:14:31.283 "tls_version": 0, 00:14:31.283 "enable_ktls": false 00:14:31.283 } 00:14:31.283 }, 00:14:31.283 { 00:14:31.283 "method": "sock_impl_set_options", 00:14:31.283 "params": { 00:14:31.283 "impl_name": "posix", 00:14:31.283 "recv_buf_size": 2097152, 00:14:31.283 "send_buf_size": 2097152, 00:14:31.283 "enable_recv_pipe": true, 00:14:31.283 "enable_quickack": false, 00:14:31.283 "enable_placement_id": 0, 00:14:31.283 "enable_zerocopy_send_server": true, 00:14:31.283 "enable_zerocopy_send_client": false, 00:14:31.283 "zerocopy_threshold": 0, 00:14:31.284 "tls_version": 0, 00:14:31.284 "enable_ktls": false 00:14:31.284 } 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "method": "sock_impl_set_options", 00:14:31.284 "params": { 00:14:31.284 "impl_name": "uring", 00:14:31.284 "recv_buf_size": 2097152, 00:14:31.284 "send_buf_size": 2097152, 00:14:31.284 "enable_recv_pipe": true, 00:14:31.284 "enable_quickack": false, 00:14:31.284 "enable_placement_id": 0, 00:14:31.284 "enable_zerocopy_send_server": false, 00:14:31.284 "enable_zerocopy_send_client": false, 00:14:31.284 "zerocopy_threshold": 0, 00:14:31.284 "tls_version": 0, 00:14:31.284 "enable_ktls": false 00:14:31.284 } 00:14:31.284 } 00:14:31.284 ] 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "subsystem": "vmd", 00:14:31.284 "config": [] 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "subsystem": "accel", 00:14:31.284 "config": [ 00:14:31.284 { 00:14:31.284 "method": "accel_set_options", 00:14:31.284 "params": { 00:14:31.284 "small_cache_size": 128, 00:14:31.284 "large_cache_size": 16, 00:14:31.284 "task_count": 2048, 00:14:31.284 "sequence_count": 2048, 00:14:31.284 "buf_count": 2048 00:14:31.284 } 00:14:31.284 } 00:14:31.284 ] 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "subsystem": "bdev", 00:14:31.284 "config": [ 00:14:31.284 { 00:14:31.284 "method": "bdev_set_options", 00:14:31.284 "params": { 00:14:31.284 "bdev_io_pool_size": 65535, 00:14:31.284 "bdev_io_cache_size": 256, 00:14:31.284 "bdev_auto_examine": true, 00:14:31.284 "iobuf_small_cache_size": 128, 00:14:31.284 "iobuf_large_cache_size": 16 00:14:31.284 } 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "method": "bdev_raid_set_options", 00:14:31.284 "params": { 00:14:31.284 "process_window_size_kb": 1024 00:14:31.284 } 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "method": "bdev_iscsi_set_options", 00:14:31.284 "params": { 00:14:31.284 "timeout_sec": 30 00:14:31.284 } 00:14:31.284 }, 00:14:31.284 { 00:14:31.284 "method": "bdev_nvme_set_options", 00:14:31.284 "params": { 00:14:31.284 "action_on_timeout": "none", 00:14:31.284 "timeout_us": 0, 00:14:31.284 "timeout_admin_us": 0, 00:14:31.284 "keep_alive_timeout_ms": 10000, 00:14:31.284 "arbitration_burst": 0, 00:14:31.285 "low_priority_weight": 0, 00:14:31.285 "medium_priority_weight": 0, 00:14:31.285 "high_priority_weight": 0, 00:14:31.285 "nvme_adminq_poll_period_us": 10000, 00:14:31.285 "nvme_ioq_poll_period_us": 0, 00:14:31.285 "io_queue_requests": 0, 00:14:31.285 
"delay_cmd_submit": true, 00:14:31.285 "transport_retry_count": 4, 00:14:31.285 "bdev_retry_count": 3, 00:14:31.285 "transport_ack_timeout": 0, 00:14:31.285 "ctrlr_loss_timeout_sec": 0, 00:14:31.285 "reconnect_delay_sec": 0, 00:14:31.285 "fast_io_fail_timeout_sec": 0, 00:14:31.285 "disable_auto_failback": false, 00:14:31.285 "generate_uuids": false, 00:14:31.285 "transport_tos": 0, 00:14:31.285 "nvme_error_stat": false, 00:14:31.285 "rdma_srq_size": 0, 00:14:31.285 "io_path_stat": false, 00:14:31.285 "allow_accel_sequence": false, 00:14:31.285 "rdma_max_cq_size": 0, 00:14:31.285 "rdma_cm_event_timeout_ms": 0, 00:14:31.285 "dhchap_digests": [ 00:14:31.285 "sha256", 00:14:31.285 "sha384", 00:14:31.285 "sha512" 00:14:31.285 ], 00:14:31.285 "dhchap_dhgroups": [ 00:14:31.285 "null", 00:14:31.285 "ffdhe2048", 00:14:31.285 "ffdhe3072", 00:14:31.285 "ffdhe4096", 00:14:31.285 "ffdhe6144", 00:14:31.285 "ffdhe8192" 00:14:31.285 ] 00:14:31.285 } 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "method": "bdev_nvme_set_hotplug", 00:14:31.285 "params": { 00:14:31.285 "period_us": 100000, 00:14:31.285 "enable": false 00:14:31.285 } 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "method": "bdev_malloc_create", 00:14:31.285 "params": { 00:14:31.285 "name": "malloc0", 00:14:31.285 "num_blocks": 8192, 00:14:31.285 "block_size": 4096, 00:14:31.285 "physical_block_size": 4096, 00:14:31.285 "uuid": "d45c572a-ab3e-4bca-87df-9efca31c2885", 00:14:31.285 "optimal_io_boundary": 0 00:14:31.285 } 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "method": "bdev_wait_for_examine" 00:14:31.285 } 00:14:31.285 ] 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "subsystem": "nbd", 00:14:31.285 "config": [] 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "subsystem": "scheduler", 00:14:31.285 "config": [ 00:14:31.285 { 00:14:31.285 "method": "framework_set_scheduler", 00:14:31.285 "params": { 00:14:31.285 "name": "static" 00:14:31.285 } 00:14:31.285 } 00:14:31.285 ] 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "subsystem": "nvmf", 00:14:31.285 "config": [ 00:14:31.285 { 00:14:31.285 "method": "nvmf_set_config", 00:14:31.285 "params": { 00:14:31.285 "discovery_filter": "match_any", 00:14:31.285 "admin_cmd_passthru": { 00:14:31.285 "identify_ctrlr": false 00:14:31.285 } 00:14:31.285 } 00:14:31.285 }, 00:14:31.285 { 00:14:31.285 "method": "nvmf_set_max_subsystems", 00:14:31.285 "params": { 00:14:31.285 "max_subsystems": 1024 00:14:31.286 } 00:14:31.286 }, 00:14:31.286 { 00:14:31.286 "method": "nvmf_set_crdt", 00:14:31.286 "params": { 00:14:31.286 "crdt1": 0, 00:14:31.286 "crdt2": 0, 00:14:31.286 "crdt3": 0 00:14:31.286 } 00:14:31.286 }, 00:14:31.286 { 00:14:31.286 "method": "nvmf_create_transport", 00:14:31.286 "params": { 00:14:31.286 "trtype": "TCP", 00:14:31.286 "max_queue_depth": 128, 00:14:31.286 "max_io_qpairs_per_ctrlr": 127, 00:14:31.286 "in_capsule_data_size": 4096, 00:14:31.286 "max_io_size": 131072, 00:14:31.286 "io_unit_size": 131072, 00:14:31.286 "max_aq_depth": 128, 00:14:31.286 "num_shared_buffers": 511, 00:14:31.286 "buf_cache_size": 4294967295, 00:14:31.286 "dif_insert_or_strip": false, 00:14:31.286 "zcopy": false, 00:14:31.286 "c2h_success": false, 00:14:31.286 "sock_priority": 0, 00:14:31.286 "abort_timeout_sec": 1, 00:14:31.286 "ack_timeout": 0, 00:14:31.286 "data_wr_pool_size": 0 00:14:31.286 } 00:14:31.286 }, 00:14:31.286 { 00:14:31.286 "method": "nvmf_create_subsystem", 00:14:31.286 "params": { 00:14:31.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.286 "allow_any_host": false, 00:14:31.286 "serial_number": 
"SPDK00000000000001", 00:14:31.286 "model_number": "SPDK bdev Controller", 00:14:31.286 "max_namespaces": 10, 00:14:31.286 "min_cntlid": 1, 00:14:31.286 "max_cntlid": 65519, 00:14:31.286 "ana_reporting": false 00:14:31.286 } 00:14:31.286 }, 00:14:31.286 { 00:14:31.286 "method": "nvmf_subsystem_add_host", 00:14:31.286 "params": { 00:14:31.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.286 "host": "nqn.2016-06.io.spdk:host1", 00:14:31.286 "psk": "/tmp/tmp.nNGlligVhF" 00:14:31.286 } 00:14:31.286 }, 00:14:31.286 { 00:14:31.286 "method": "nvmf_subsystem_add_ns", 00:14:31.286 "params": { 00:14:31.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.286 "namespace": { 00:14:31.286 "nsid": 1, 00:14:31.286 "bdev_name": "malloc0", 00:14:31.286 "nguid": "D45C572AAB3E4BCA87DF9EFCA31C2885", 00:14:31.286 "uuid": "d45c572a-ab3e-4bca-87df-9efca31c2885", 00:14:31.286 "no_auto_visible": false 00:14:31.286 } 00:14:31.286 } 00:14:31.286 }, 00:14:31.286 { 00:14:31.286 "method": "nvmf_subsystem_add_listener", 00:14:31.286 "params": { 00:14:31.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.286 "listen_address": { 00:14:31.286 "trtype": "TCP", 00:14:31.286 "adrfam": "IPv4", 00:14:31.286 "traddr": "10.0.0.2", 00:14:31.286 "trsvcid": "4420" 00:14:31.286 }, 00:14:31.286 "secure_channel": true 00:14:31.286 } 00:14:31.286 } 00:14:31.286 ] 00:14:31.286 } 00:14:31.286 ] 00:14:31.286 }' 00:14:31.286 08:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:31.554 08:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:31.554 "subsystems": [ 00:14:31.554 { 00:14:31.554 "subsystem": "keyring", 00:14:31.554 "config": [] 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "subsystem": "iobuf", 00:14:31.554 "config": [ 00:14:31.554 { 00:14:31.554 "method": "iobuf_set_options", 00:14:31.554 "params": { 00:14:31.554 "small_pool_count": 8192, 00:14:31.554 "large_pool_count": 1024, 00:14:31.554 "small_bufsize": 8192, 00:14:31.554 "large_bufsize": 135168 00:14:31.554 } 00:14:31.554 } 00:14:31.554 ] 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "subsystem": "sock", 00:14:31.554 "config": [ 00:14:31.554 { 00:14:31.554 "method": "sock_set_default_impl", 00:14:31.554 "params": { 00:14:31.554 "impl_name": "uring" 00:14:31.554 } 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "method": "sock_impl_set_options", 00:14:31.554 "params": { 00:14:31.554 "impl_name": "ssl", 00:14:31.554 "recv_buf_size": 4096, 00:14:31.554 "send_buf_size": 4096, 00:14:31.554 "enable_recv_pipe": true, 00:14:31.554 "enable_quickack": false, 00:14:31.554 "enable_placement_id": 0, 00:14:31.554 "enable_zerocopy_send_server": true, 00:14:31.554 "enable_zerocopy_send_client": false, 00:14:31.554 "zerocopy_threshold": 0, 00:14:31.554 "tls_version": 0, 00:14:31.554 "enable_ktls": false 00:14:31.554 } 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "method": "sock_impl_set_options", 00:14:31.554 "params": { 00:14:31.554 "impl_name": "posix", 00:14:31.554 "recv_buf_size": 2097152, 00:14:31.554 "send_buf_size": 2097152, 00:14:31.554 "enable_recv_pipe": true, 00:14:31.554 "enable_quickack": false, 00:14:31.554 "enable_placement_id": 0, 00:14:31.554 "enable_zerocopy_send_server": true, 00:14:31.554 "enable_zerocopy_send_client": false, 00:14:31.554 "zerocopy_threshold": 0, 00:14:31.554 "tls_version": 0, 00:14:31.554 "enable_ktls": false 00:14:31.554 } 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "method": "sock_impl_set_options", 00:14:31.554 "params": { 00:14:31.554 "impl_name": "uring", 
00:14:31.554 "recv_buf_size": 2097152, 00:14:31.554 "send_buf_size": 2097152, 00:14:31.554 "enable_recv_pipe": true, 00:14:31.554 "enable_quickack": false, 00:14:31.554 "enable_placement_id": 0, 00:14:31.554 "enable_zerocopy_send_server": false, 00:14:31.554 "enable_zerocopy_send_client": false, 00:14:31.554 "zerocopy_threshold": 0, 00:14:31.554 "tls_version": 0, 00:14:31.554 "enable_ktls": false 00:14:31.554 } 00:14:31.554 } 00:14:31.554 ] 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "subsystem": "vmd", 00:14:31.554 "config": [] 00:14:31.554 }, 00:14:31.554 { 00:14:31.554 "subsystem": "accel", 00:14:31.554 "config": [ 00:14:31.554 { 00:14:31.554 "method": "accel_set_options", 00:14:31.555 "params": { 00:14:31.555 "small_cache_size": 128, 00:14:31.555 "large_cache_size": 16, 00:14:31.555 "task_count": 2048, 00:14:31.555 "sequence_count": 2048, 00:14:31.555 "buf_count": 2048 00:14:31.555 } 00:14:31.555 } 00:14:31.555 ] 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "subsystem": "bdev", 00:14:31.555 "config": [ 00:14:31.555 { 00:14:31.555 "method": "bdev_set_options", 00:14:31.555 "params": { 00:14:31.555 "bdev_io_pool_size": 65535, 00:14:31.555 "bdev_io_cache_size": 256, 00:14:31.555 "bdev_auto_examine": true, 00:14:31.555 "iobuf_small_cache_size": 128, 00:14:31.555 "iobuf_large_cache_size": 16 00:14:31.555 } 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "method": "bdev_raid_set_options", 00:14:31.555 "params": { 00:14:31.555 "process_window_size_kb": 1024 00:14:31.555 } 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "method": "bdev_iscsi_set_options", 00:14:31.555 "params": { 00:14:31.555 "timeout_sec": 30 00:14:31.555 } 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "method": "bdev_nvme_set_options", 00:14:31.555 "params": { 00:14:31.555 "action_on_timeout": "none", 00:14:31.555 "timeout_us": 0, 00:14:31.555 "timeout_admin_us": 0, 00:14:31.555 "keep_alive_timeout_ms": 10000, 00:14:31.555 "arbitration_burst": 0, 00:14:31.555 "low_priority_weight": 0, 00:14:31.555 "medium_priority_weight": 0, 00:14:31.555 "high_priority_weight": 0, 00:14:31.555 "nvme_adminq_poll_period_us": 10000, 00:14:31.555 "nvme_ioq_poll_period_us": 0, 00:14:31.555 "io_queue_requests": 512, 00:14:31.555 "delay_cmd_submit": true, 00:14:31.555 "transport_retry_count": 4, 00:14:31.555 "bdev_retry_count": 3, 00:14:31.555 "transport_ack_timeout": 0, 00:14:31.555 "ctrlr_loss_timeout_sec": 0, 00:14:31.555 "reconnect_delay_sec": 0, 00:14:31.555 "fast_io_fail_timeout_sec": 0, 00:14:31.555 "disable_auto_failback": false, 00:14:31.555 "generate_uuids": false, 00:14:31.555 "transport_tos": 0, 00:14:31.555 "nvme_error_stat": false, 00:14:31.555 "rdma_srq_size": 0, 00:14:31.555 "io_path_stat": false, 00:14:31.555 "allow_accel_sequence": false, 00:14:31.555 "rdma_max_cq_size": 0, 00:14:31.555 "rdma_cm_event_timeout_ms": 0, 00:14:31.555 "dhchap_digests": [ 00:14:31.555 "sha256", 00:14:31.555 "sha384", 00:14:31.555 "sha512" 00:14:31.555 ], 00:14:31.555 "dhchap_dhgroups": [ 00:14:31.555 "null", 00:14:31.555 "ffdhe2048", 00:14:31.555 "ffdhe3072", 00:14:31.555 "ffdhe4096", 00:14:31.555 "ffdhe6144", 00:14:31.555 "ffdhe8192" 00:14:31.555 ] 00:14:31.555 } 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "method": "bdev_nvme_attach_controller", 00:14:31.555 "params": { 00:14:31.555 "name": "TLSTEST", 00:14:31.555 "trtype": "TCP", 00:14:31.555 "adrfam": "IPv4", 00:14:31.555 "traddr": "10.0.0.2", 00:14:31.555 "trsvcid": "4420", 00:14:31.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.555 "prchk_reftag": false, 00:14:31.555 "prchk_guard": false, 00:14:31.555 
"ctrlr_loss_timeout_sec": 0, 00:14:31.555 "reconnect_delay_sec": 0, 00:14:31.555 "fast_io_fail_timeout_sec": 0, 00:14:31.555 "psk": "/tmp/tmp.nNGlligVhF", 00:14:31.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.555 "hdgst": false, 00:14:31.555 "ddgst": false 00:14:31.555 } 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "method": "bdev_nvme_set_hotplug", 00:14:31.555 "params": { 00:14:31.555 "period_us": 100000, 00:14:31.555 "enable": false 00:14:31.555 } 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "method": "bdev_wait_for_examine" 00:14:31.555 } 00:14:31.555 ] 00:14:31.555 }, 00:14:31.555 { 00:14:31.555 "subsystem": "nbd", 00:14:31.555 "config": [] 00:14:31.555 } 00:14:31.555 ] 00:14:31.555 }' 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73817 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73817 ']' 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73817 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73817 00:14:31.555 killing process with pid 73817 00:14:31.555 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.555 00:14:31.555 Latency(us) 00:14:31.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.555 =================================================================================================================== 00:14:31.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73817' 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73817 00:14:31.555 [2024-07-15 08:27:23.511267] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:31.555 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73817 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73762 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73762 ']' 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73762 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73762 00:14:31.813 killing process with pid 73762 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73762' 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73762 00:14:31.813 [2024-07-15 08:27:23.764037] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:31.813 08:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73762 00:14:32.071 08:27:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:32.071 08:27:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.071 08:27:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:32.071 "subsystems": [ 00:14:32.071 { 00:14:32.071 "subsystem": "keyring", 00:14:32.071 "config": [] 00:14:32.071 }, 00:14:32.071 { 00:14:32.071 "subsystem": "iobuf", 00:14:32.071 "config": [ 00:14:32.071 { 00:14:32.071 "method": "iobuf_set_options", 00:14:32.071 "params": { 00:14:32.071 "small_pool_count": 8192, 00:14:32.071 "large_pool_count": 1024, 00:14:32.071 "small_bufsize": 8192, 00:14:32.071 "large_bufsize": 135168 00:14:32.071 } 00:14:32.071 } 00:14:32.071 ] 00:14:32.071 }, 00:14:32.071 { 00:14:32.071 "subsystem": "sock", 00:14:32.071 "config": [ 00:14:32.071 { 00:14:32.071 "method": "sock_set_default_impl", 00:14:32.071 "params": { 00:14:32.071 "impl_name": "uring" 00:14:32.071 } 00:14:32.071 }, 00:14:32.071 { 00:14:32.071 "method": "sock_impl_set_options", 00:14:32.071 "params": { 00:14:32.071 "impl_name": "ssl", 00:14:32.071 "recv_buf_size": 4096, 00:14:32.071 "send_buf_size": 4096, 00:14:32.071 "enable_recv_pipe": true, 00:14:32.071 "enable_quickack": false, 00:14:32.071 "enable_placement_id": 0, 00:14:32.071 "enable_zerocopy_send_server": true, 00:14:32.071 "enable_zerocopy_send_client": false, 00:14:32.071 "zerocopy_threshold": 0, 00:14:32.071 "tls_version": 0, 00:14:32.072 "enable_ktls": false 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "sock_impl_set_options", 00:14:32.072 "params": { 00:14:32.072 "impl_name": "posix", 00:14:32.072 "recv_buf_size": 2097152, 00:14:32.072 "send_buf_size": 2097152, 00:14:32.072 "enable_recv_pipe": true, 00:14:32.072 "enable_quickack": false, 00:14:32.072 "enable_placement_id": 0, 00:14:32.072 "enable_zerocopy_send_server": true, 00:14:32.072 "enable_zerocopy_send_client": false, 00:14:32.072 "zerocopy_threshold": 0, 00:14:32.072 "tls_version": 0, 00:14:32.072 "enable_ktls": false 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "sock_impl_set_options", 00:14:32.072 "params": { 00:14:32.072 "impl_name": "uring", 00:14:32.072 "recv_buf_size": 2097152, 00:14:32.072 "send_buf_size": 2097152, 00:14:32.072 "enable_recv_pipe": true, 00:14:32.072 "enable_quickack": false, 00:14:32.072 "enable_placement_id": 0, 00:14:32.072 "enable_zerocopy_send_server": false, 00:14:32.072 "enable_zerocopy_send_client": false, 00:14:32.072 "zerocopy_threshold": 0, 00:14:32.072 "tls_version": 0, 00:14:32.072 "enable_ktls": false 00:14:32.072 } 00:14:32.072 } 00:14:32.072 ] 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "subsystem": "vmd", 00:14:32.072 "config": [] 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "subsystem": "accel", 00:14:32.072 "config": [ 00:14:32.072 { 00:14:32.072 "method": "accel_set_options", 00:14:32.072 "params": { 00:14:32.072 "small_cache_size": 128, 00:14:32.072 "large_cache_size": 16, 00:14:32.072 "task_count": 2048, 00:14:32.072 "sequence_count": 2048, 00:14:32.072 "buf_count": 2048 00:14:32.072 } 00:14:32.072 } 00:14:32.072 ] 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "subsystem": "bdev", 00:14:32.072 "config": [ 00:14:32.072 { 00:14:32.072 "method": "bdev_set_options", 00:14:32.072 "params": { 00:14:32.072 "bdev_io_pool_size": 65535, 00:14:32.072 "bdev_io_cache_size": 256, 00:14:32.072 "bdev_auto_examine": 
true, 00:14:32.072 "iobuf_small_cache_size": 128, 00:14:32.072 "iobuf_large_cache_size": 16 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "bdev_raid_set_options", 00:14:32.072 "params": { 00:14:32.072 "process_window_size_kb": 1024 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "bdev_iscsi_set_options", 00:14:32.072 "params": { 00:14:32.072 "timeout_sec": 30 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "bdev_nvme_set_options", 00:14:32.072 "params": { 00:14:32.072 "action_on_timeout": "none", 00:14:32.072 "timeout_us": 0, 00:14:32.072 "timeout_admin_us": 0, 00:14:32.072 "keep_alive_timeout_ms": 10000, 00:14:32.072 "arbitration_burst": 0, 00:14:32.072 "low_priority_weight": 0, 00:14:32.072 "medium_priority_weight": 0, 00:14:32.072 "high_priority_weight": 0, 00:14:32.072 "nvme_adminq_poll_period_us": 10000, 00:14:32.072 "nvme_ioq_poll_period_us": 0, 00:14:32.072 "io_queue_requests": 0, 00:14:32.072 "delay_cmd_submit": true, 00:14:32.072 "transport_retry_count": 4, 00:14:32.072 "bdev_retry_count": 3, 00:14:32.072 "transport_ack_timeout": 0, 00:14:32.072 "ctrlr_loss_timeout_sec": 0, 00:14:32.072 "reconnect_delay_sec": 0, 00:14:32.072 "fast_io_fail_timeout_sec": 0, 00:14:32.072 "disable_auto_failback": false, 00:14:32.072 "generate_uuids": false, 00:14:32.072 "transport_tos": 0, 00:14:32.072 "nvme_error_stat": false, 00:14:32.072 "rdma_srq_size": 0, 00:14:32.072 "io_path_stat": false, 00:14:32.072 "allow_accel_sequence": false, 00:14:32.072 "rdma_max_cq_size": 0, 00:14:32.072 "rdma_cm_event_timeout_ms": 0, 00:14:32.072 "dhchap_digests": [ 00:14:32.072 "sha256", 00:14:32.072 "sha384", 00:14:32.072 "sha512" 00:14:32.072 ], 00:14:32.072 "dhchap_dhgroups": [ 00:14:32.072 "null", 00:14:32.072 "ffdhe2048", 00:14:32.072 "ffdhe3072", 00:14:32.072 "ffdhe4096", 00:14:32.072 "ffdhe6144", 00:14:32.072 "ffdhe8192" 00:14:32.072 ] 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "bdev_nvme_set_hotplug", 00:14:32.072 "params": { 00:14:32.072 "period_us": 100000, 00:14:32.072 "enable": false 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "bdev_malloc_create", 00:14:32.072 "params": { 00:14:32.072 "name": "malloc0", 00:14:32.072 "num_blocks": 8192, 00:14:32.072 "block_size": 4096, 00:14:32.072 "physical_block_size": 4096, 00:14:32.072 "uuid": "d45c572a-ab3e-4bca-87df-9efca31c2885", 00:14:32.072 "optimal_io_boundary": 0 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "bdev_wait_for_examine" 00:14:32.072 } 00:14:32.072 ] 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "subsystem": "nbd", 00:14:32.072 "config": [] 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "subsystem": "scheduler", 00:14:32.072 "config": [ 00:14:32.072 { 00:14:32.072 "method": "framework_set_scheduler", 00:14:32.072 "params": { 00:14:32.072 "name": "static" 00:14:32.072 } 00:14:32.072 } 00:14:32.072 ] 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "subsystem": "nvmf", 00:14:32.072 "config": [ 00:14:32.072 { 00:14:32.072 "method": "nvmf_set_config", 00:14:32.072 "params": { 00:14:32.072 "discovery_filter": "match_any", 00:14:32.072 "admin_cmd_passthru": { 00:14:32.072 "identify_ctrlr": false 00:14:32.072 } 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_set_max_subsystems", 00:14:32.072 "params": { 00:14:32.072 "max_subsystems": 1024 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_set_crdt", 00:14:32.072 "params": { 00:14:32.072 "crdt1": 0, 00:14:32.072 "crdt2": 0, 
00:14:32.072 "crdt3": 0 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_create_transport", 00:14:32.072 "params": { 00:14:32.072 "trtype": "TCP", 00:14:32.072 "max_queue_depth": 128, 00:14:32.072 "max_io_qpairs_per_ctrlr": 127, 00:14:32.072 "in_capsule_data_size": 4096, 00:14:32.072 "max_io_size": 131072, 00:14:32.072 "io_unit_size": 131072, 00:14:32.072 "max_aq_depth": 128, 00:14:32.072 "num_shared_buffers": 511, 00:14:32.072 "buf_cache_size": 4294967295, 00:14:32.072 "dif_insert_or_strip": false, 00:14:32.072 "zcopy": false, 00:14:32.072 "c2h_success": false, 00:14:32.072 "sock_priority": 0, 00:14:32.072 "abort_timeout_sec": 1, 00:14:32.072 "ack_timeout": 0, 00:14:32.072 "data_wr_pool_size": 0 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_create_subsystem", 00:14:32.072 "params": { 00:14:32.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.072 "allow_any_host": false, 00:14:32.072 "serial_number": "SPDK00000000000001", 00:14:32.072 "model_number": "SPDK bdev Controller", 00:14:32.072 "max_namespaces": 10, 00:14:32.072 "min_cntlid": 1, 00:14:32.072 "max_cntlid": 65519, 00:14:32.072 "ana_reporting": false 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_subsystem_add_host", 00:14:32.072 "params": { 00:14:32.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.072 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.072 "psk": "/tmp/tmp.nNGlligVhF" 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_subsystem_add_ns", 00:14:32.072 "params": { 00:14:32.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.072 "namespace": { 00:14:32.072 "nsid": 1, 00:14:32.072 "bdev_name": "malloc0", 00:14:32.072 "nguid": "D45C572AAB3E4BCA87DF9EFCA31C2885", 00:14:32.072 "uuid": "d45c572a-ab3e-4bca-87df-9efca31c2885", 00:14:32.072 "no_auto_visible": false 00:14:32.072 } 00:14:32.072 } 00:14:32.072 }, 00:14:32.072 { 00:14:32.072 "method": "nvmf_subsystem_add_listener", 00:14:32.072 "params": { 00:14:32.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.072 "listen_address": { 00:14:32.072 "trtype": "TCP", 00:14:32.072 "adrfam": "IPv4", 00:14:32.072 "traddr": "10.0.0.2", 00:14:32.072 "trsvcid": "4420" 00:14:32.073 }, 00:14:32.073 "secure_channel": true 00:14:32.073 } 00:14:32.073 } 00:14:32.073 ] 00:14:32.073 } 00:14:32.073 ] 00:14:32.073 }' 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73864 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73864 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73864 ']' 00:14:32.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.073 08:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.073 [2024-07-15 08:27:24.074318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:32.073 [2024-07-15 08:27:24.074444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.073 [2024-07-15 08:27:24.220349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.333 [2024-07-15 08:27:24.338736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.333 [2024-07-15 08:27:24.338801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.333 [2024-07-15 08:27:24.338814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.333 [2024-07-15 08:27:24.338823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.333 [2024-07-15 08:27:24.338830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.333 [2024-07-15 08:27:24.338940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.333 [2024-07-15 08:27:24.506211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.591 [2024-07-15 08:27:24.575543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.591 [2024-07-15 08:27:24.591494] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:32.591 [2024-07-15 08:27:24.607469] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:32.591 [2024-07-15 08:27:24.607685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.850 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.850 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:32.850 08:27:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.850 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.850 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73892 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73892 /var/tmp/bdevperf.sock 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73892 ']' 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
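Both app starts block on waitforlisten until the new process answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock for the target, /var/tmp/bdevperf.sock for bdevperf). The real helper lives in autotest_common.sh and is not reproduced in this trace; the snippet below is only a hypothetical stand-in for what it waits on:

    # hypothetical equivalent of waitforlisten: poll the RPC socket until the app responds
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                          # app died, give up
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }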
00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:33.108 08:27:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:33.108 "subsystems": [ 00:14:33.108 { 00:14:33.108 "subsystem": "keyring", 00:14:33.108 "config": [] 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "subsystem": "iobuf", 00:14:33.108 "config": [ 00:14:33.108 { 00:14:33.108 "method": "iobuf_set_options", 00:14:33.108 "params": { 00:14:33.108 "small_pool_count": 8192, 00:14:33.108 "large_pool_count": 1024, 00:14:33.108 "small_bufsize": 8192, 00:14:33.108 "large_bufsize": 135168 00:14:33.108 } 00:14:33.108 } 00:14:33.108 ] 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "subsystem": "sock", 00:14:33.108 "config": [ 00:14:33.108 { 00:14:33.108 "method": "sock_set_default_impl", 00:14:33.108 "params": { 00:14:33.108 "impl_name": "uring" 00:14:33.108 } 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "method": "sock_impl_set_options", 00:14:33.108 "params": { 00:14:33.108 "impl_name": "ssl", 00:14:33.108 "recv_buf_size": 4096, 00:14:33.108 "send_buf_size": 4096, 00:14:33.108 "enable_recv_pipe": true, 00:14:33.108 "enable_quickack": false, 00:14:33.108 "enable_placement_id": 0, 00:14:33.108 "enable_zerocopy_send_server": true, 00:14:33.108 "enable_zerocopy_send_client": false, 00:14:33.108 "zerocopy_threshold": 0, 00:14:33.108 "tls_version": 0, 00:14:33.108 "enable_ktls": false 00:14:33.108 } 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "method": "sock_impl_set_options", 00:14:33.108 "params": { 00:14:33.108 "impl_name": "posix", 00:14:33.108 "recv_buf_size": 2097152, 00:14:33.108 "send_buf_size": 2097152, 00:14:33.108 "enable_recv_pipe": true, 00:14:33.108 "enable_quickack": false, 00:14:33.108 "enable_placement_id": 0, 00:14:33.108 "enable_zerocopy_send_server": true, 00:14:33.108 "enable_zerocopy_send_client": false, 00:14:33.108 "zerocopy_threshold": 0, 00:14:33.108 "tls_version": 0, 00:14:33.108 "enable_ktls": false 00:14:33.108 } 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "method": "sock_impl_set_options", 00:14:33.108 "params": { 00:14:33.108 "impl_name": "uring", 00:14:33.108 "recv_buf_size": 2097152, 00:14:33.108 "send_buf_size": 2097152, 00:14:33.108 "enable_recv_pipe": true, 00:14:33.108 "enable_quickack": false, 00:14:33.108 "enable_placement_id": 0, 00:14:33.108 "enable_zerocopy_send_server": false, 00:14:33.108 "enable_zerocopy_send_client": false, 00:14:33.108 "zerocopy_threshold": 0, 00:14:33.108 "tls_version": 0, 00:14:33.108 "enable_ktls": false 00:14:33.108 } 00:14:33.108 } 00:14:33.108 ] 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "subsystem": "vmd", 00:14:33.108 "config": [] 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "subsystem": "accel", 00:14:33.108 "config": [ 00:14:33.108 { 00:14:33.108 "method": "accel_set_options", 00:14:33.108 "params": { 00:14:33.108 "small_cache_size": 128, 00:14:33.108 "large_cache_size": 16, 00:14:33.108 "task_count": 2048, 00:14:33.108 "sequence_count": 2048, 00:14:33.108 "buf_count": 2048 00:14:33.108 } 00:14:33.108 } 00:14:33.108 ] 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "subsystem": "bdev", 00:14:33.108 "config": [ 00:14:33.108 { 00:14:33.108 "method": "bdev_set_options", 00:14:33.108 "params": { 00:14:33.108 "bdev_io_pool_size": 65535, 00:14:33.108 
"bdev_io_cache_size": 256, 00:14:33.108 "bdev_auto_examine": true, 00:14:33.108 "iobuf_small_cache_size": 128, 00:14:33.108 "iobuf_large_cache_size": 16 00:14:33.108 } 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "method": "bdev_raid_set_options", 00:14:33.108 "params": { 00:14:33.108 "process_window_size_kb": 1024 00:14:33.108 } 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "method": "bdev_iscsi_set_options", 00:14:33.108 "params": { 00:14:33.108 "timeout_sec": 30 00:14:33.108 } 00:14:33.108 }, 00:14:33.108 { 00:14:33.108 "method": "bdev_nvme_set_options", 00:14:33.109 "params": { 00:14:33.109 "action_on_timeout": "none", 00:14:33.109 "timeout_us": 0, 00:14:33.109 "timeout_admin_us": 0, 00:14:33.109 "keep_alive_timeout_ms": 10000, 00:14:33.109 "arbitration_burst": 0, 00:14:33.109 "low_priority_weight": 0, 00:14:33.109 "medium_priority_weight": 0, 00:14:33.109 "high_priority_weight": 0, 00:14:33.109 "nvme_adminq_poll_period_us": 10000, 00:14:33.109 "nvme_ioq_poll_period_us": 0, 00:14:33.109 "io_queue_requests": 512, 00:14:33.109 "delay_cmd_submit": true, 00:14:33.109 "transport_retry_count": 4, 00:14:33.109 "bdev_retry_count": 3, 00:14:33.109 "transport_ack_timeout": 0, 00:14:33.109 "ctrlr_loss_timeout_sec": 0, 00:14:33.109 "reconnect_delay_sec": 0, 00:14:33.109 "fast_io_fail_timeout_sec": 0, 00:14:33.109 "disable_auto_failback": false, 00:14:33.109 "generate_uuids": false, 00:14:33.109 "transport_tos": 0, 00:14:33.109 "nvme_error_stat": false, 00:14:33.109 "rdma_srq_size": 0, 00:14:33.109 "io_path_stat": false, 00:14:33.109 "allow_accel_sequence": false, 00:14:33.109 "rdma_max_cq_size": 0, 00:14:33.109 "rdma_cm_event_timeout_ms": 0, 00:14:33.109 "dhchap_digests": [ 00:14:33.109 "sha256", 00:14:33.109 "sha384", 00:14:33.109 "sha512" 00:14:33.109 ], 00:14:33.109 "dhchap_dhgroups": [ 00:14:33.109 "null", 00:14:33.109 "ffdhe2048", 00:14:33.109 "ffdhe3072", 00:14:33.109 "ffdhe4096", 00:14:33.109 "ffdhe6144", 00:14:33.109 "ffdhe8192" 00:14:33.109 ] 00:14:33.109 } 00:14:33.109 }, 00:14:33.109 { 00:14:33.109 "method": "bdev_nvme_attach_controller", 00:14:33.109 "params": { 00:14:33.109 "name": "TLSTEST", 00:14:33.109 "trtype": "TCP", 00:14:33.109 "adrfam": "IPv4", 00:14:33.109 "traddr": "10.0.0.2", 00:14:33.109 "trsvcid": "4420", 00:14:33.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.109 "prchk_reftag": false, 00:14:33.109 "prchk_guard": false, 00:14:33.109 "ctrlr_loss_timeout_sec": 0, 00:14:33.109 "reconnect_delay_sec": 0, 00:14:33.109 "fast_io_fail_timeout_sec": 0, 00:14:33.109 "psk": "/tmp/tmp.nNGlligVhF", 00:14:33.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.109 "hdgst": false, 00:14:33.109 "ddgst": false 00:14:33.109 } 00:14:33.109 }, 00:14:33.109 { 00:14:33.109 "method": "bdev_nvme_set_hotplug", 00:14:33.109 "params": { 00:14:33.109 "period_us": 100000, 00:14:33.109 "enable": false 00:14:33.109 } 00:14:33.109 }, 00:14:33.109 { 00:14:33.109 "method": "bdev_wait_for_examine" 00:14:33.109 } 00:14:33.109 ] 00:14:33.109 }, 00:14:33.109 { 00:14:33.109 "subsystem": "nbd", 00:14:33.109 "config": [] 00:14:33.109 } 00:14:33.109 ] 00:14:33.109 }' 00:14:33.109 [2024-07-15 08:27:25.113060] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:33.109 [2024-07-15 08:27:25.113469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73892 ] 00:14:33.109 [2024-07-15 08:27:25.254080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.367 [2024-07-15 08:27:25.383225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.367 [2024-07-15 08:27:25.518618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.625 [2024-07-15 08:27:25.557649] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.625 [2024-07-15 08:27:25.558053] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:34.192 08:27:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.192 08:27:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:34.192 08:27:26 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:34.192 Running I/O for 10 seconds... 00:14:44.200 00:14:44.200 Latency(us) 00:14:44.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.200 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:44.200 Verification LBA range: start 0x0 length 0x2000 00:14:44.200 TLSTESTn1 : 10.01 3919.85 15.31 0.00 0.00 32601.53 5034.36 36223.53 00:14:44.200 =================================================================================================================== 00:14:44.200 Total : 3919.85 15.31 0.00 0.00 32601.53 5034.36 36223.53 00:14:44.200 0 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73892 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73892 ']' 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73892 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73892 00:14:44.200 killing process with pid 73892 00:14:44.200 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.200 00:14:44.200 Latency(us) 00:14:44.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.200 =================================================================================================================== 00:14:44.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73892' 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73892 00:14:44.200 [2024-07-15 08:27:36.315211] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:44.200 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73892 00:14:44.458 08:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73864 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73864 ']' 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73864 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73864 00:14:44.459 killing process with pid 73864 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73864' 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73864 00:14:44.459 [2024-07-15 08:27:36.566928] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:44.459 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73864 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74038 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74038 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74038 ']' 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.717 08:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.717 [2024-07-15 08:27:36.853614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:44.718 [2024-07-15 08:27:36.853704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.976 [2024-07-15 08:27:36.989242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.976 [2024-07-15 08:27:37.096790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:44.976 [2024-07-15 08:27:37.096848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.976 [2024-07-15 08:27:37.096860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.976 [2024-07-15 08:27:37.096869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.976 [2024-07-15 08:27:37.096876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.976 [2024-07-15 08:27:37.096901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.976 [2024-07-15 08:27:37.148323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.nNGlligVhF 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nNGlligVhF 00:14:45.910 08:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:46.168 [2024-07-15 08:27:38.125831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.168 08:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:46.427 08:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:46.685 [2024-07-15 08:27:38.745975] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:46.685 [2024-07-15 08:27:38.746244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.685 08:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:46.944 malloc0 00:14:46.944 08:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:47.202 08:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNGlligVhF 00:14:47.460 [2024-07-15 08:27:39.525004] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74088 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
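For reference, the setup_nvmf_tgt sequence the xtrace above just walked through (target/tls.sh@49-58) boils down to the following RPCs; the commands are copied from the trace, with only the long scripts/rpc.py path abbreviated to rpc.py:

    key=/tmp/tmp.nNGlligVhF                             # PSK file created earlier in the test
    rpc.py nvmf_create_transport -t tcp -o              # flags as in the trace; the saved config shows c2h_success=false
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k -> secure_channel: true
    rpc.py bdev_malloc_create 32 4096 -b malloc0        # 32 MiB, 4 KiB blocks (8192 blocks in the saved config)
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"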
00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74088 /var/tmp/bdevperf.sock 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74088 ']' 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.460 08:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.460 [2024-07-15 08:27:39.592214] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:47.460 [2024-07-15 08:27:39.592301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74088 ] 00:14:47.718 [2024-07-15 08:27:39.728588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.718 [2024-07-15 08:27:39.854554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.976 [2024-07-15 08:27:39.910610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.544 08:27:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.544 08:27:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:48.544 08:27:40 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nNGlligVhF 00:14:48.803 08:27:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:49.062 [2024-07-15 08:27:41.117807] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.062 nvme0n1 00:14:49.062 08:27:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.321 Running I/O for 1 seconds... 
00:14:50.256 00:14:50.256 Latency(us) 00:14:50.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.256 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.256 Verification LBA range: start 0x0 length 0x2000 00:14:50.256 nvme0n1 : 1.02 4089.74 15.98 0.00 0.00 30974.82 7119.59 22520.55 00:14:50.256 =================================================================================================================== 00:14:50.256 Total : 4089.74 15.98 0.00 0.00 30974.82 7119.59 22520.55 00:14:50.256 0 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74088 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74088 ']' 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74088 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74088 00:14:50.256 killing process with pid 74088 00:14:50.256 Received shutdown signal, test time was about 1.000000 seconds 00:14:50.256 00:14:50.256 Latency(us) 00:14:50.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.256 =================================================================================================================== 00:14:50.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74088' 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74088 00:14:50.256 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74088 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74038 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74038 ']' 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74038 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74038 00:14:50.514 killing process with pid 74038 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74038' 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74038 00:14:50.514 [2024-07-15 08:27:42.656526] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:50.514 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74038 00:14:50.772 08:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:50.772 08:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.772 08:27:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.772 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.772 08:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74145 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74145 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74145 ']' 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.773 08:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.773 [2024-07-15 08:27:42.945610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:50.773 [2024-07-15 08:27:42.946436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.031 [2024-07-15 08:27:43.083301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.031 [2024-07-15 08:27:43.196059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.031 [2024-07-15 08:27:43.196308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.031 [2024-07-15 08:27:43.196451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.031 [2024-07-15 08:27:43.196578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.031 [2024-07-15 08:27:43.196716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
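From here the test brings up yet another target (pid 74145), configures it over rpc_cmd instead of a replayed JSON file, and again drives a bdevperf initiator that receives the PSK through the keyring rather than as a raw path in its config. That keyring-based attach, already issued at target/tls.sh@227-228 above and issued again at @255-256 below, is just two RPCs (copied from the trace, rpc.py path shortened):

    # register the PSK file as a named key on the bdevperf app ...
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nNGlligVhF
    # ... then attach the TLS-protected controller by key name instead of by file path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1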
00:14:51.031 [2024-07-15 08:27:43.196798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.288 [2024-07-15 08:27:43.249579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.853 08:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.853 [2024-07-15 08:27:43.985518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.853 malloc0 00:14:51.853 [2024-07-15 08:27:44.017106] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.853 [2024-07-15 08:27:44.017311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74177 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74177 /var/tmp/bdevperf.sock 00:14:52.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74177 ']' 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.188 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.189 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.189 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.189 08:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.189 [2024-07-15 08:27:44.104276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:52.189 [2024-07-15 08:27:44.104415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74177 ] 00:14:52.189 [2024-07-15 08:27:44.246660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.477 [2024-07-15 08:27:44.378559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.477 [2024-07-15 08:27:44.435837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:53.043 08:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.043 08:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:53.043 08:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nNGlligVhF 00:14:53.302 08:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:53.559 [2024-07-15 08:27:45.612678] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.559 nvme0n1 00:14:53.559 08:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.817 Running I/O for 1 seconds... 00:14:54.751 00:14:54.751 Latency(us) 00:14:54.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.751 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.751 Verification LBA range: start 0x0 length 0x2000 00:14:54.751 nvme0n1 : 1.02 3898.61 15.23 0.00 0.00 32494.94 6613.18 25976.09 00:14:54.751 =================================================================================================================== 00:14:54.751 Total : 3898.61 15.23 0.00 0.00 32494.94 6613.18 25976.09 00:14:54.751 0 00:14:54.751 08:27:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:54.751 08:27:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.751 08:27:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.009 08:27:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.009 08:27:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:55.009 "subsystems": [ 00:14:55.009 { 00:14:55.009 "subsystem": "keyring", 00:14:55.009 "config": [ 00:14:55.009 { 00:14:55.009 "method": "keyring_file_add_key", 00:14:55.009 "params": { 00:14:55.009 "name": "key0", 00:14:55.009 "path": "/tmp/tmp.nNGlligVhF" 00:14:55.009 } 00:14:55.009 } 00:14:55.009 ] 00:14:55.009 }, 00:14:55.009 { 00:14:55.009 "subsystem": "iobuf", 00:14:55.009 "config": [ 00:14:55.009 { 00:14:55.009 "method": "iobuf_set_options", 00:14:55.009 "params": { 00:14:55.009 "small_pool_count": 8192, 00:14:55.009 "large_pool_count": 1024, 00:14:55.009 "small_bufsize": 8192, 00:14:55.009 "large_bufsize": 135168 00:14:55.009 } 00:14:55.009 } 00:14:55.009 ] 00:14:55.009 }, 00:14:55.009 { 00:14:55.009 "subsystem": "sock", 00:14:55.009 "config": [ 00:14:55.009 { 00:14:55.009 "method": "sock_set_default_impl", 00:14:55.009 "params": { 00:14:55.009 "impl_name": "uring" 
00:14:55.009 } 00:14:55.009 }, 00:14:55.009 { 00:14:55.009 "method": "sock_impl_set_options", 00:14:55.009 "params": { 00:14:55.009 "impl_name": "ssl", 00:14:55.009 "recv_buf_size": 4096, 00:14:55.009 "send_buf_size": 4096, 00:14:55.009 "enable_recv_pipe": true, 00:14:55.009 "enable_quickack": false, 00:14:55.009 "enable_placement_id": 0, 00:14:55.009 "enable_zerocopy_send_server": true, 00:14:55.009 "enable_zerocopy_send_client": false, 00:14:55.009 "zerocopy_threshold": 0, 00:14:55.009 "tls_version": 0, 00:14:55.009 "enable_ktls": false 00:14:55.009 } 00:14:55.009 }, 00:14:55.009 { 00:14:55.009 "method": "sock_impl_set_options", 00:14:55.009 "params": { 00:14:55.009 "impl_name": "posix", 00:14:55.009 "recv_buf_size": 2097152, 00:14:55.009 "send_buf_size": 2097152, 00:14:55.009 "enable_recv_pipe": true, 00:14:55.009 "enable_quickack": false, 00:14:55.009 "enable_placement_id": 0, 00:14:55.009 "enable_zerocopy_send_server": true, 00:14:55.009 "enable_zerocopy_send_client": false, 00:14:55.009 "zerocopy_threshold": 0, 00:14:55.009 "tls_version": 0, 00:14:55.009 "enable_ktls": false 00:14:55.009 } 00:14:55.009 }, 00:14:55.009 { 00:14:55.009 "method": "sock_impl_set_options", 00:14:55.009 "params": { 00:14:55.009 "impl_name": "uring", 00:14:55.009 "recv_buf_size": 2097152, 00:14:55.009 "send_buf_size": 2097152, 00:14:55.009 "enable_recv_pipe": true, 00:14:55.009 "enable_quickack": false, 00:14:55.009 "enable_placement_id": 0, 00:14:55.010 "enable_zerocopy_send_server": false, 00:14:55.010 "enable_zerocopy_send_client": false, 00:14:55.010 "zerocopy_threshold": 0, 00:14:55.010 "tls_version": 0, 00:14:55.010 "enable_ktls": false 00:14:55.010 } 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "subsystem": "vmd", 00:14:55.010 "config": [] 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "subsystem": "accel", 00:14:55.010 "config": [ 00:14:55.010 { 00:14:55.010 "method": "accel_set_options", 00:14:55.010 "params": { 00:14:55.010 "small_cache_size": 128, 00:14:55.010 "large_cache_size": 16, 00:14:55.010 "task_count": 2048, 00:14:55.010 "sequence_count": 2048, 00:14:55.010 "buf_count": 2048 00:14:55.010 } 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "subsystem": "bdev", 00:14:55.010 "config": [ 00:14:55.010 { 00:14:55.010 "method": "bdev_set_options", 00:14:55.010 "params": { 00:14:55.010 "bdev_io_pool_size": 65535, 00:14:55.010 "bdev_io_cache_size": 256, 00:14:55.010 "bdev_auto_examine": true, 00:14:55.010 "iobuf_small_cache_size": 128, 00:14:55.010 "iobuf_large_cache_size": 16 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "bdev_raid_set_options", 00:14:55.010 "params": { 00:14:55.010 "process_window_size_kb": 1024 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "bdev_iscsi_set_options", 00:14:55.010 "params": { 00:14:55.010 "timeout_sec": 30 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "bdev_nvme_set_options", 00:14:55.010 "params": { 00:14:55.010 "action_on_timeout": "none", 00:14:55.010 "timeout_us": 0, 00:14:55.010 "timeout_admin_us": 0, 00:14:55.010 "keep_alive_timeout_ms": 10000, 00:14:55.010 "arbitration_burst": 0, 00:14:55.010 "low_priority_weight": 0, 00:14:55.010 "medium_priority_weight": 0, 00:14:55.010 "high_priority_weight": 0, 00:14:55.010 "nvme_adminq_poll_period_us": 10000, 00:14:55.010 "nvme_ioq_poll_period_us": 0, 00:14:55.010 "io_queue_requests": 0, 00:14:55.010 "delay_cmd_submit": true, 00:14:55.010 "transport_retry_count": 4, 00:14:55.010 "bdev_retry_count": 3, 
00:14:55.010 "transport_ack_timeout": 0, 00:14:55.010 "ctrlr_loss_timeout_sec": 0, 00:14:55.010 "reconnect_delay_sec": 0, 00:14:55.010 "fast_io_fail_timeout_sec": 0, 00:14:55.010 "disable_auto_failback": false, 00:14:55.010 "generate_uuids": false, 00:14:55.010 "transport_tos": 0, 00:14:55.010 "nvme_error_stat": false, 00:14:55.010 "rdma_srq_size": 0, 00:14:55.010 "io_path_stat": false, 00:14:55.010 "allow_accel_sequence": false, 00:14:55.010 "rdma_max_cq_size": 0, 00:14:55.010 "rdma_cm_event_timeout_ms": 0, 00:14:55.010 "dhchap_digests": [ 00:14:55.010 "sha256", 00:14:55.010 "sha384", 00:14:55.010 "sha512" 00:14:55.010 ], 00:14:55.010 "dhchap_dhgroups": [ 00:14:55.010 "null", 00:14:55.010 "ffdhe2048", 00:14:55.010 "ffdhe3072", 00:14:55.010 "ffdhe4096", 00:14:55.010 "ffdhe6144", 00:14:55.010 "ffdhe8192" 00:14:55.010 ] 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "bdev_nvme_set_hotplug", 00:14:55.010 "params": { 00:14:55.010 "period_us": 100000, 00:14:55.010 "enable": false 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "bdev_malloc_create", 00:14:55.010 "params": { 00:14:55.010 "name": "malloc0", 00:14:55.010 "num_blocks": 8192, 00:14:55.010 "block_size": 4096, 00:14:55.010 "physical_block_size": 4096, 00:14:55.010 "uuid": "dc0d9ead-a697-4810-9adb-393f61c9668b", 00:14:55.010 "optimal_io_boundary": 0 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "bdev_wait_for_examine" 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "subsystem": "nbd", 00:14:55.010 "config": [] 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "subsystem": "scheduler", 00:14:55.010 "config": [ 00:14:55.010 { 00:14:55.010 "method": "framework_set_scheduler", 00:14:55.010 "params": { 00:14:55.010 "name": "static" 00:14:55.010 } 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "subsystem": "nvmf", 00:14:55.010 "config": [ 00:14:55.010 { 00:14:55.010 "method": "nvmf_set_config", 00:14:55.010 "params": { 00:14:55.010 "discovery_filter": "match_any", 00:14:55.010 "admin_cmd_passthru": { 00:14:55.010 "identify_ctrlr": false 00:14:55.010 } 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_set_max_subsystems", 00:14:55.010 "params": { 00:14:55.010 "max_subsystems": 1024 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_set_crdt", 00:14:55.010 "params": { 00:14:55.010 "crdt1": 0, 00:14:55.010 "crdt2": 0, 00:14:55.010 "crdt3": 0 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_create_transport", 00:14:55.010 "params": { 00:14:55.010 "trtype": "TCP", 00:14:55.010 "max_queue_depth": 128, 00:14:55.010 "max_io_qpairs_per_ctrlr": 127, 00:14:55.010 "in_capsule_data_size": 4096, 00:14:55.010 "max_io_size": 131072, 00:14:55.010 "io_unit_size": 131072, 00:14:55.010 "max_aq_depth": 128, 00:14:55.010 "num_shared_buffers": 511, 00:14:55.010 "buf_cache_size": 4294967295, 00:14:55.010 "dif_insert_or_strip": false, 00:14:55.010 "zcopy": false, 00:14:55.010 "c2h_success": false, 00:14:55.010 "sock_priority": 0, 00:14:55.010 "abort_timeout_sec": 1, 00:14:55.010 "ack_timeout": 0, 00:14:55.010 "data_wr_pool_size": 0 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_create_subsystem", 00:14:55.010 "params": { 00:14:55.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.010 "allow_any_host": false, 00:14:55.010 "serial_number": "00000000000000000000", 00:14:55.010 "model_number": "SPDK bdev Controller", 00:14:55.010 "max_namespaces": 32, 
00:14:55.010 "min_cntlid": 1, 00:14:55.010 "max_cntlid": 65519, 00:14:55.010 "ana_reporting": false 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_subsystem_add_host", 00:14:55.010 "params": { 00:14:55.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.010 "host": "nqn.2016-06.io.spdk:host1", 00:14:55.010 "psk": "key0" 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_subsystem_add_ns", 00:14:55.010 "params": { 00:14:55.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.010 "namespace": { 00:14:55.010 "nsid": 1, 00:14:55.010 "bdev_name": "malloc0", 00:14:55.010 "nguid": "DC0D9EADA69748109ADB393F61C9668B", 00:14:55.010 "uuid": "dc0d9ead-a697-4810-9adb-393f61c9668b", 00:14:55.010 "no_auto_visible": false 00:14:55.010 } 00:14:55.010 } 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "method": "nvmf_subsystem_add_listener", 00:14:55.010 "params": { 00:14:55.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.010 "listen_address": { 00:14:55.010 "trtype": "TCP", 00:14:55.010 "adrfam": "IPv4", 00:14:55.010 "traddr": "10.0.0.2", 00:14:55.010 "trsvcid": "4420" 00:14:55.010 }, 00:14:55.010 "secure_channel": true 00:14:55.010 } 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }' 00:14:55.010 08:27:46 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:55.268 08:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:55.268 "subsystems": [ 00:14:55.268 { 00:14:55.268 "subsystem": "keyring", 00:14:55.268 "config": [ 00:14:55.268 { 00:14:55.268 "method": "keyring_file_add_key", 00:14:55.268 "params": { 00:14:55.268 "name": "key0", 00:14:55.268 "path": "/tmp/tmp.nNGlligVhF" 00:14:55.268 } 00:14:55.268 } 00:14:55.268 ] 00:14:55.268 }, 00:14:55.268 { 00:14:55.268 "subsystem": "iobuf", 00:14:55.268 "config": [ 00:14:55.268 { 00:14:55.268 "method": "iobuf_set_options", 00:14:55.268 "params": { 00:14:55.268 "small_pool_count": 8192, 00:14:55.268 "large_pool_count": 1024, 00:14:55.268 "small_bufsize": 8192, 00:14:55.268 "large_bufsize": 135168 00:14:55.268 } 00:14:55.268 } 00:14:55.268 ] 00:14:55.268 }, 00:14:55.268 { 00:14:55.268 "subsystem": "sock", 00:14:55.268 "config": [ 00:14:55.268 { 00:14:55.268 "method": "sock_set_default_impl", 00:14:55.268 "params": { 00:14:55.268 "impl_name": "uring" 00:14:55.268 } 00:14:55.268 }, 00:14:55.268 { 00:14:55.268 "method": "sock_impl_set_options", 00:14:55.268 "params": { 00:14:55.268 "impl_name": "ssl", 00:14:55.268 "recv_buf_size": 4096, 00:14:55.268 "send_buf_size": 4096, 00:14:55.268 "enable_recv_pipe": true, 00:14:55.268 "enable_quickack": false, 00:14:55.268 "enable_placement_id": 0, 00:14:55.268 "enable_zerocopy_send_server": true, 00:14:55.268 "enable_zerocopy_send_client": false, 00:14:55.268 "zerocopy_threshold": 0, 00:14:55.268 "tls_version": 0, 00:14:55.268 "enable_ktls": false 00:14:55.268 } 00:14:55.268 }, 00:14:55.268 { 00:14:55.268 "method": "sock_impl_set_options", 00:14:55.268 "params": { 00:14:55.268 "impl_name": "posix", 00:14:55.268 "recv_buf_size": 2097152, 00:14:55.268 "send_buf_size": 2097152, 00:14:55.268 "enable_recv_pipe": true, 00:14:55.268 "enable_quickack": false, 00:14:55.268 "enable_placement_id": 0, 00:14:55.268 "enable_zerocopy_send_server": true, 00:14:55.268 "enable_zerocopy_send_client": false, 00:14:55.268 "zerocopy_threshold": 0, 00:14:55.268 "tls_version": 0, 00:14:55.268 "enable_ktls": false 00:14:55.268 } 00:14:55.268 }, 00:14:55.268 { 00:14:55.268 "method": 
"sock_impl_set_options", 00:14:55.268 "params": { 00:14:55.268 "impl_name": "uring", 00:14:55.268 "recv_buf_size": 2097152, 00:14:55.269 "send_buf_size": 2097152, 00:14:55.269 "enable_recv_pipe": true, 00:14:55.269 "enable_quickack": false, 00:14:55.269 "enable_placement_id": 0, 00:14:55.269 "enable_zerocopy_send_server": false, 00:14:55.269 "enable_zerocopy_send_client": false, 00:14:55.269 "zerocopy_threshold": 0, 00:14:55.269 "tls_version": 0, 00:14:55.269 "enable_ktls": false 00:14:55.269 } 00:14:55.269 } 00:14:55.269 ] 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "subsystem": "vmd", 00:14:55.269 "config": [] 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "subsystem": "accel", 00:14:55.269 "config": [ 00:14:55.269 { 00:14:55.269 "method": "accel_set_options", 00:14:55.269 "params": { 00:14:55.269 "small_cache_size": 128, 00:14:55.269 "large_cache_size": 16, 00:14:55.269 "task_count": 2048, 00:14:55.269 "sequence_count": 2048, 00:14:55.269 "buf_count": 2048 00:14:55.269 } 00:14:55.269 } 00:14:55.269 ] 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "subsystem": "bdev", 00:14:55.269 "config": [ 00:14:55.269 { 00:14:55.269 "method": "bdev_set_options", 00:14:55.269 "params": { 00:14:55.269 "bdev_io_pool_size": 65535, 00:14:55.269 "bdev_io_cache_size": 256, 00:14:55.269 "bdev_auto_examine": true, 00:14:55.269 "iobuf_small_cache_size": 128, 00:14:55.269 "iobuf_large_cache_size": 16 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_raid_set_options", 00:14:55.269 "params": { 00:14:55.269 "process_window_size_kb": 1024 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_iscsi_set_options", 00:14:55.269 "params": { 00:14:55.269 "timeout_sec": 30 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_nvme_set_options", 00:14:55.269 "params": { 00:14:55.269 "action_on_timeout": "none", 00:14:55.269 "timeout_us": 0, 00:14:55.269 "timeout_admin_us": 0, 00:14:55.269 "keep_alive_timeout_ms": 10000, 00:14:55.269 "arbitration_burst": 0, 00:14:55.269 "low_priority_weight": 0, 00:14:55.269 "medium_priority_weight": 0, 00:14:55.269 "high_priority_weight": 0, 00:14:55.269 "nvme_adminq_poll_period_us": 10000, 00:14:55.269 "nvme_ioq_poll_period_us": 0, 00:14:55.269 "io_queue_requests": 512, 00:14:55.269 "delay_cmd_submit": true, 00:14:55.269 "transport_retry_count": 4, 00:14:55.269 "bdev_retry_count": 3, 00:14:55.269 "transport_ack_timeout": 0, 00:14:55.269 "ctrlr_loss_timeout_sec": 0, 00:14:55.269 "reconnect_delay_sec": 0, 00:14:55.269 "fast_io_fail_timeout_sec": 0, 00:14:55.269 "disable_auto_failback": false, 00:14:55.269 "generate_uuids": false, 00:14:55.269 "transport_tos": 0, 00:14:55.269 "nvme_error_stat": false, 00:14:55.269 "rdma_srq_size": 0, 00:14:55.269 "io_path_stat": false, 00:14:55.269 "allow_accel_sequence": false, 00:14:55.269 "rdma_max_cq_size": 0, 00:14:55.269 "rdma_cm_event_timeout_ms": 0, 00:14:55.269 "dhchap_digests": [ 00:14:55.269 "sha256", 00:14:55.269 "sha384", 00:14:55.269 "sha512" 00:14:55.269 ], 00:14:55.269 "dhchap_dhgroups": [ 00:14:55.269 "null", 00:14:55.269 "ffdhe2048", 00:14:55.269 "ffdhe3072", 00:14:55.269 "ffdhe4096", 00:14:55.269 "ffdhe6144", 00:14:55.269 "ffdhe8192" 00:14:55.269 ] 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_nvme_attach_controller", 00:14:55.269 "params": { 00:14:55.269 "name": "nvme0", 00:14:55.269 "trtype": "TCP", 00:14:55.269 "adrfam": "IPv4", 00:14:55.269 "traddr": "10.0.0.2", 00:14:55.269 "trsvcid": "4420", 00:14:55.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:55.269 "prchk_reftag": false, 00:14:55.269 "prchk_guard": false, 00:14:55.269 "ctrlr_loss_timeout_sec": 0, 00:14:55.269 "reconnect_delay_sec": 0, 00:14:55.269 "fast_io_fail_timeout_sec": 0, 00:14:55.269 "psk": "key0", 00:14:55.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.269 "hdgst": false, 00:14:55.269 "ddgst": false 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_nvme_set_hotplug", 00:14:55.269 "params": { 00:14:55.269 "period_us": 100000, 00:14:55.269 "enable": false 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_enable_histogram", 00:14:55.269 "params": { 00:14:55.269 "name": "nvme0n1", 00:14:55.269 "enable": true 00:14:55.269 } 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "method": "bdev_wait_for_examine" 00:14:55.269 } 00:14:55.269 ] 00:14:55.269 }, 00:14:55.269 { 00:14:55.269 "subsystem": "nbd", 00:14:55.269 "config": [] 00:14:55.269 } 00:14:55.269 ] 00:14:55.269 }' 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74177 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74177 ']' 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74177 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74177 00:14:55.269 killing process with pid 74177 00:14:55.269 Received shutdown signal, test time was about 1.000000 seconds 00:14:55.269 00:14:55.269 Latency(us) 00:14:55.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.269 =================================================================================================================== 00:14:55.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74177' 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74177 00:14:55.269 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74177 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74145 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74145 ']' 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74145 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74145 00:14:55.527 killing process with pid 74145 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74145' 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74145 00:14:55.527 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74145 
00:14:55.785 08:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:55.785 08:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.785 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.785 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.785 08:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:55.785 "subsystems": [ 00:14:55.785 { 00:14:55.785 "subsystem": "keyring", 00:14:55.785 "config": [ 00:14:55.785 { 00:14:55.785 "method": "keyring_file_add_key", 00:14:55.785 "params": { 00:14:55.785 "name": "key0", 00:14:55.785 "path": "/tmp/tmp.nNGlligVhF" 00:14:55.785 } 00:14:55.785 } 00:14:55.785 ] 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "subsystem": "iobuf", 00:14:55.785 "config": [ 00:14:55.785 { 00:14:55.785 "method": "iobuf_set_options", 00:14:55.785 "params": { 00:14:55.785 "small_pool_count": 8192, 00:14:55.785 "large_pool_count": 1024, 00:14:55.785 "small_bufsize": 8192, 00:14:55.785 "large_bufsize": 135168 00:14:55.785 } 00:14:55.785 } 00:14:55.785 ] 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "subsystem": "sock", 00:14:55.785 "config": [ 00:14:55.785 { 00:14:55.785 "method": "sock_set_default_impl", 00:14:55.785 "params": { 00:14:55.785 "impl_name": "uring" 00:14:55.785 } 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "method": "sock_impl_set_options", 00:14:55.785 "params": { 00:14:55.785 "impl_name": "ssl", 00:14:55.785 "recv_buf_size": 4096, 00:14:55.785 "send_buf_size": 4096, 00:14:55.785 "enable_recv_pipe": true, 00:14:55.785 "enable_quickack": false, 00:14:55.785 "enable_placement_id": 0, 00:14:55.785 "enable_zerocopy_send_server": true, 00:14:55.785 "enable_zerocopy_send_client": false, 00:14:55.785 "zerocopy_threshold": 0, 00:14:55.785 "tls_version": 0, 00:14:55.785 "enable_ktls": false 00:14:55.785 } 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "method": "sock_impl_set_options", 00:14:55.785 "params": { 00:14:55.785 "impl_name": "posix", 00:14:55.785 "recv_buf_size": 2097152, 00:14:55.785 "send_buf_size": 2097152, 00:14:55.785 "enable_recv_pipe": true, 00:14:55.785 "enable_quickack": false, 00:14:55.785 "enable_placement_id": 0, 00:14:55.785 "enable_zerocopy_send_server": true, 00:14:55.785 "enable_zerocopy_send_client": false, 00:14:55.785 "zerocopy_threshold": 0, 00:14:55.785 "tls_version": 0, 00:14:55.785 "enable_ktls": false 00:14:55.785 } 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "method": "sock_impl_set_options", 00:14:55.785 "params": { 00:14:55.785 "impl_name": "uring", 00:14:55.785 "recv_buf_size": 2097152, 00:14:55.785 "send_buf_size": 2097152, 00:14:55.785 "enable_recv_pipe": true, 00:14:55.785 "enable_quickack": false, 00:14:55.785 "enable_placement_id": 0, 00:14:55.785 "enable_zerocopy_send_server": false, 00:14:55.785 "enable_zerocopy_send_client": false, 00:14:55.785 "zerocopy_threshold": 0, 00:14:55.785 "tls_version": 0, 00:14:55.785 "enable_ktls": false 00:14:55.785 } 00:14:55.785 } 00:14:55.785 ] 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "subsystem": "vmd", 00:14:55.785 "config": [] 00:14:55.785 }, 00:14:55.785 { 00:14:55.785 "subsystem": "accel", 00:14:55.785 "config": [ 00:14:55.785 { 00:14:55.785 "method": "accel_set_options", 00:14:55.785 "params": { 00:14:55.785 "small_cache_size": 128, 00:14:55.785 "large_cache_size": 16, 00:14:55.785 "task_count": 2048, 00:14:55.785 "sequence_count": 2048, 00:14:55.785 "buf_count": 2048 00:14:55.785 } 00:14:55.785 } 00:14:55.785 ] 00:14:55.785 }, 00:14:55.785 { 
00:14:55.785 "subsystem": "bdev", 00:14:55.785 "config": [ 00:14:55.785 { 00:14:55.785 "method": "bdev_set_options", 00:14:55.785 "params": { 00:14:55.785 "bdev_io_pool_size": 65535, 00:14:55.785 "bdev_io_cache_size": 256, 00:14:55.785 "bdev_auto_examine": true, 00:14:55.785 "iobuf_small_cache_size": 128, 00:14:55.785 "iobuf_large_cache_size": 16 00:14:55.785 } 00:14:55.785 }, 00:14:55.785 { 00:14:55.786 "method": "bdev_raid_set_options", 00:14:55.786 "params": { 00:14:55.786 "process_window_size_kb": 1024 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "bdev_iscsi_set_options", 00:14:55.786 "params": { 00:14:55.786 "timeout_sec": 30 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "bdev_nvme_set_options", 00:14:55.786 "params": { 00:14:55.786 "action_on_timeout": "none", 00:14:55.786 "timeout_us": 0, 00:14:55.786 "timeout_admin_us": 0, 00:14:55.786 "keep_alive_timeout_ms": 10000, 00:14:55.786 "arbitration_burst": 0, 00:14:55.786 "low_priority_weight": 0, 00:14:55.786 "medium_priority_weight": 0, 00:14:55.786 "high_priority_weight": 0, 00:14:55.786 "nvme_adminq_poll_period_us": 10000, 00:14:55.786 "nvme_ioq_poll_period_us": 0, 00:14:55.786 "io_queue_requests": 0, 00:14:55.786 "delay_cmd_submit": true, 00:14:55.786 "transport_retry_count": 4, 00:14:55.786 "bdev_retry_count": 3, 00:14:55.786 "transport_ack_timeout": 0, 00:14:55.786 "ctrlr_loss_timeout_sec": 0, 00:14:55.786 "reconnect_delay_sec": 0, 00:14:55.786 "fast_io_fail_timeout_sec": 0, 00:14:55.786 "disable_auto_failback": false, 00:14:55.786 "generate_uuids": false, 00:14:55.786 "transport_tos": 0, 00:14:55.786 "nvme_error_stat": false, 00:14:55.786 "rdma_srq_size": 0, 00:14:55.786 "io_path_stat": false, 00:14:55.786 "allow_accel_sequence": false, 00:14:55.786 "rdma_max_cq_size": 0, 00:14:55.786 "rdma_cm_event_timeout_ms": 0, 00:14:55.786 "dhchap_digests": [ 00:14:55.786 "sha256", 00:14:55.786 "sha384", 00:14:55.786 "sha512" 00:14:55.786 ], 00:14:55.786 "dhchap_dhgroups": [ 00:14:55.786 "null", 00:14:55.786 "ffdhe2048", 00:14:55.786 "ffdhe3072", 00:14:55.786 "ffdhe4096", 00:14:55.786 "ffdhe6144", 00:14:55.786 "ffdhe8192" 00:14:55.786 ] 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "bdev_nvme_set_hotplug", 00:14:55.786 "params": { 00:14:55.786 "period_us": 100000, 00:14:55.786 "enable": false 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "bdev_malloc_create", 00:14:55.786 "params": { 00:14:55.786 "name": "malloc0", 00:14:55.786 "num_blocks": 8192, 00:14:55.786 "block_size": 4096, 00:14:55.786 "physical_block_size": 4096, 00:14:55.786 "uuid": "dc0d9ead-a697-4810-9adb-393f61c9668b", 00:14:55.786 "optimal_io_boundary": 0 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "bdev_wait_for_examine" 00:14:55.786 } 00:14:55.786 ] 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "subsystem": "nbd", 00:14:55.786 "config": [] 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "subsystem": "scheduler", 00:14:55.786 "config": [ 00:14:55.786 { 00:14:55.786 "method": "framework_set_scheduler", 00:14:55.786 "params": { 00:14:55.786 "name": "static" 00:14:55.786 } 00:14:55.786 } 00:14:55.786 ] 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "subsystem": "nvmf", 00:14:55.786 "config": [ 00:14:55.786 { 00:14:55.786 "method": "nvmf_set_config", 00:14:55.786 "params": { 00:14:55.786 "discovery_filter": "match_any", 00:14:55.786 "admin_cmd_passthru": { 00:14:55.786 "identify_ctrlr": false 00:14:55.786 } 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": 
"nvmf_set_max_subsystems", 00:14:55.786 "params": { 00:14:55.786 "max_subsystems": 1024 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "nvmf_set_crdt", 00:14:55.786 "params": { 00:14:55.786 "crdt1": 0, 00:14:55.786 "crdt2": 0, 00:14:55.786 "crdt3": 0 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "nvmf_create_transport", 00:14:55.786 "params": { 00:14:55.786 "trtype": "TCP", 00:14:55.786 "max_queue_depth": 128, 00:14:55.786 "max_io_qpairs_per_ctrlr": 127, 00:14:55.786 "in_capsule_data_size": 4096, 00:14:55.786 "max_io_size": 131072, 00:14:55.786 "io_unit_size": 131072, 00:14:55.786 "max_aq_depth": 128, 00:14:55.786 "num_shared_buffers": 511, 00:14:55.786 "buf_cache_size": 4294967295, 00:14:55.786 "dif_insert_or_strip": false, 00:14:55.786 "zcopy": false, 00:14:55.786 "c2h_success": false, 00:14:55.786 "sock_priority": 0, 00:14:55.786 "abort_timeout_sec": 1, 00:14:55.786 "ack_timeout": 0, 00:14:55.786 "data_wr_pool_size": 0 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "nvmf_create_subsystem", 00:14:55.786 "params": { 00:14:55.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.786 "allow_any_host": false, 00:14:55.786 "serial_number": "00000000000000000000", 00:14:55.786 "model_number": "SPDK bdev Controller", 00:14:55.786 "max_namespaces": 32, 00:14:55.786 "min_cntlid": 1, 00:14:55.786 "max_cntlid": 65519, 00:14:55.786 "ana_reporting": false 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "nvmf_subsystem_add_host", 00:14:55.786 "params": { 00:14:55.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.786 "host": "nqn.2016-06.io.spdk:host1", 00:14:55.786 "psk": "key0" 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "nvmf_subsystem_add_ns", 00:14:55.786 "params": { 00:14:55.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.786 "namespace": { 00:14:55.786 "nsid": 1, 00:14:55.786 "bdev_name": "malloc0", 00:14:55.786 "nguid": "DC0D9EADA69748109ADB393F61C9668B", 00:14:55.786 "uuid": "dc0d9ead-a697-4810-9adb-393f61c9668b", 00:14:55.786 "no_auto_visible": false 00:14:55.786 } 00:14:55.786 } 00:14:55.786 }, 00:14:55.786 { 00:14:55.786 "method": "nvmf_subsystem_add_listener", 00:14:55.786 "params": { 00:14:55.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.786 "listen_address": { 00:14:55.786 "trtype": "TCP", 00:14:55.786 "adrfam": "IPv4", 00:14:55.786 "traddr": "10.0.0.2", 00:14:55.786 "trsvcid": "4420" 00:14:55.786 }, 00:14:55.786 "secure_channel": true 00:14:55.786 } 00:14:55.786 } 00:14:55.786 ] 00:14:55.786 } 00:14:55.786 ] 00:14:55.786 }' 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74238 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74238 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74238 ']' 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:55.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.786 08:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.786 [2024-07-15 08:27:47.951421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:55.786 [2024-07-15 08:27:47.951525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.043 [2024-07-15 08:27:48.088458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.043 [2024-07-15 08:27:48.200700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.043 [2024-07-15 08:27:48.200767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.043 [2024-07-15 08:27:48.200780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.043 [2024-07-15 08:27:48.200789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.043 [2024-07-15 08:27:48.200796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.043 [2024-07-15 08:27:48.200882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.299 [2024-07-15 08:27:48.366652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.299 [2024-07-15 08:27:48.444778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.556 [2024-07-15 08:27:48.476695] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:56.556 [2024-07-15 08:27:48.476929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.813 08:27:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.813 08:27:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.813 08:27:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.813 08:27:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.813 08:27:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74270 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74270 /var/tmp/bdevperf.sock 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74270 ']' 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
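Neither application in this test reads its configuration from a file on disk: the target above was started with -c /dev/fd/62 and bdevperf below is started with -c /dev/fd/63, with the JSON (echoed next) delivered through that descriptor. A minimal sketch of the launch pattern, assuming the harness uses bash process substitution here (the exact mechanism is not visible in the log; $bperfcfg is the config variable captured by target/tls.sh earlier):

  # <( ... ) expands to a /dev/fd/NN path that bdevperf reads as its JSON config
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")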
00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.070 08:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:57.070 "subsystems": [ 00:14:57.070 { 00:14:57.070 "subsystem": "keyring", 00:14:57.070 "config": [ 00:14:57.070 { 00:14:57.070 "method": "keyring_file_add_key", 00:14:57.070 "params": { 00:14:57.070 "name": "key0", 00:14:57.070 "path": "/tmp/tmp.nNGlligVhF" 00:14:57.070 } 00:14:57.070 } 00:14:57.070 ] 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "subsystem": "iobuf", 00:14:57.070 "config": [ 00:14:57.070 { 00:14:57.070 "method": "iobuf_set_options", 00:14:57.070 "params": { 00:14:57.070 "small_pool_count": 8192, 00:14:57.070 "large_pool_count": 1024, 00:14:57.070 "small_bufsize": 8192, 00:14:57.070 "large_bufsize": 135168 00:14:57.070 } 00:14:57.070 } 00:14:57.070 ] 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "subsystem": "sock", 00:14:57.070 "config": [ 00:14:57.070 { 00:14:57.070 "method": "sock_set_default_impl", 00:14:57.070 "params": { 00:14:57.070 "impl_name": "uring" 00:14:57.070 } 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "method": "sock_impl_set_options", 00:14:57.070 "params": { 00:14:57.070 "impl_name": "ssl", 00:14:57.070 "recv_buf_size": 4096, 00:14:57.070 "send_buf_size": 4096, 00:14:57.070 "enable_recv_pipe": true, 00:14:57.070 "enable_quickack": false, 00:14:57.070 "enable_placement_id": 0, 00:14:57.070 "enable_zerocopy_send_server": true, 00:14:57.070 "enable_zerocopy_send_client": false, 00:14:57.070 "zerocopy_threshold": 0, 00:14:57.070 "tls_version": 0, 00:14:57.070 "enable_ktls": false 00:14:57.070 } 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "method": "sock_impl_set_options", 00:14:57.070 "params": { 00:14:57.070 "impl_name": "posix", 00:14:57.070 "recv_buf_size": 2097152, 00:14:57.070 "send_buf_size": 2097152, 00:14:57.070 "enable_recv_pipe": true, 00:14:57.070 "enable_quickack": false, 00:14:57.070 "enable_placement_id": 0, 00:14:57.070 "enable_zerocopy_send_server": true, 00:14:57.070 "enable_zerocopy_send_client": false, 00:14:57.070 "zerocopy_threshold": 0, 00:14:57.070 "tls_version": 0, 00:14:57.070 "enable_ktls": false 00:14:57.070 } 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "method": "sock_impl_set_options", 00:14:57.070 "params": { 00:14:57.070 "impl_name": "uring", 00:14:57.070 "recv_buf_size": 2097152, 00:14:57.070 "send_buf_size": 2097152, 00:14:57.070 "enable_recv_pipe": true, 00:14:57.070 "enable_quickack": false, 00:14:57.070 "enable_placement_id": 0, 00:14:57.070 "enable_zerocopy_send_server": false, 00:14:57.070 "enable_zerocopy_send_client": false, 00:14:57.070 "zerocopy_threshold": 0, 00:14:57.070 "tls_version": 0, 00:14:57.070 "enable_ktls": false 00:14:57.070 } 00:14:57.070 } 00:14:57.070 ] 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "subsystem": "vmd", 00:14:57.070 "config": [] 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "subsystem": "accel", 00:14:57.070 "config": [ 00:14:57.070 { 00:14:57.070 "method": "accel_set_options", 00:14:57.070 "params": { 00:14:57.070 "small_cache_size": 128, 00:14:57.070 "large_cache_size": 16, 00:14:57.070 "task_count": 2048, 00:14:57.070 "sequence_count": 2048, 00:14:57.070 "buf_count": 2048 00:14:57.070 } 00:14:57.070 } 00:14:57.070 ] 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "subsystem": "bdev", 00:14:57.070 "config": [ 00:14:57.070 { 00:14:57.070 "method": "bdev_set_options", 00:14:57.070 "params": { 00:14:57.070 "bdev_io_pool_size": 65535, 00:14:57.070 "bdev_io_cache_size": 256, 00:14:57.070 "bdev_auto_examine": true, 00:14:57.070 
"iobuf_small_cache_size": 128, 00:14:57.070 "iobuf_large_cache_size": 16 00:14:57.070 } 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "method": "bdev_raid_set_options", 00:14:57.070 "params": { 00:14:57.070 "process_window_size_kb": 1024 00:14:57.070 } 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "method": "bdev_iscsi_set_options", 00:14:57.070 "params": { 00:14:57.070 "timeout_sec": 30 00:14:57.070 } 00:14:57.070 }, 00:14:57.070 { 00:14:57.070 "method": "bdev_nvme_set_options", 00:14:57.070 "params": { 00:14:57.070 "action_on_timeout": "none", 00:14:57.070 "timeout_us": 0, 00:14:57.071 "timeout_admin_us": 0, 00:14:57.071 "keep_alive_timeout_ms": 10000, 00:14:57.071 "arbitration_burst": 0, 00:14:57.071 "low_priority_weight": 0, 00:14:57.071 "medium_priority_weight": 0, 00:14:57.071 "high_priority_weight": 0, 00:14:57.071 "nvme_adminq_poll_period_us": 10000, 00:14:57.071 "nvme_ioq_poll_period_us": 0, 00:14:57.071 "io_queue_requests": 512, 00:14:57.071 "delay_cmd_submit": true, 00:14:57.071 "transport_retry_count": 4, 00:14:57.071 "bdev_retry_count": 3, 00:14:57.071 "transport_ack_timeout": 0, 00:14:57.071 "ctrlr_loss_timeout_sec": 0, 00:14:57.071 "reconnect_delay_sec": 0, 00:14:57.071 "fast_io_fail_timeout_sec": 0, 00:14:57.071 "disable_auto_failback": false, 00:14:57.071 "generate_uuids": false, 00:14:57.071 "transport_tos": 0, 00:14:57.071 "nvme_error_stat": false, 00:14:57.071 "rdma_srq_size": 0, 00:14:57.071 "io_path_stat": false, 00:14:57.071 "allow_accel_sequence": false, 00:14:57.071 "rdma_max_cq_size": 0, 00:14:57.071 "rdma_cm_event_timeout_ms": 0, 00:14:57.071 "dhchap_digests": [ 00:14:57.071 "sha256", 00:14:57.071 "sha384", 00:14:57.071 "sha512" 00:14:57.071 ], 00:14:57.071 "dhchap_dhgroups": [ 00:14:57.071 "null", 00:14:57.071 "ffdhe2048", 00:14:57.071 "ffdhe3072", 00:14:57.071 "ffdhe4096", 00:14:57.071 "ffdhe6144", 00:14:57.071 "ffdhe8192" 00:14:57.071 ] 00:14:57.071 } 00:14:57.071 }, 00:14:57.071 { 00:14:57.071 "method": "bdev_nvme_attach_controller", 00:14:57.071 "params": { 00:14:57.071 "name": "nvme0", 00:14:57.071 "trtype": "TCP", 00:14:57.071 "adrfam": "IPv4", 00:14:57.071 "traddr": "10.0.0.2", 00:14:57.071 "trsvcid": "4420", 00:14:57.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.071 "prchk_reftag": false, 00:14:57.071 "prchk_guard": false, 00:14:57.071 "ctrlr_loss_timeout_sec": 0, 00:14:57.071 "reconnect_delay_sec": 0, 00:14:57.071 "fast_io_fail_timeout_sec": 0, 00:14:57.071 "psk": "key0", 00:14:57.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.071 "hdgst": false, 00:14:57.071 "ddgst": false 00:14:57.071 } 00:14:57.071 }, 00:14:57.071 { 00:14:57.071 "method": "bdev_nvme_set_hotplug", 00:14:57.071 "params": { 00:14:57.071 "period_us": 100000, 00:14:57.071 "enable": false 00:14:57.071 } 00:14:57.071 }, 00:14:57.071 { 00:14:57.071 "method": "bdev_enable_histogram", 00:14:57.071 "params": { 00:14:57.071 "name": "nvme0n1", 00:14:57.071 "enable": true 00:14:57.071 } 00:14:57.071 }, 00:14:57.071 { 00:14:57.071 "method": "bdev_wait_for_examine" 00:14:57.071 } 00:14:57.071 ] 00:14:57.071 }, 00:14:57.071 { 00:14:57.071 "subsystem": "nbd", 00:14:57.071 "config": [] 00:14:57.071 } 00:14:57.071 ] 00:14:57.071 }' 00:14:57.071 08:27:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.071 [2024-07-15 08:27:49.057256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:57.071 [2024-07-15 08:27:49.057368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74270 ] 00:14:57.071 [2024-07-15 08:27:49.192940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.357 [2024-07-15 08:27:49.334069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.357 [2024-07-15 08:27:49.474486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.633 [2024-07-15 08:27:49.526806] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:58.199 08:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.199 08:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:58.199 08:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:58.199 08:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:58.456 08:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.456 08:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.456 Running I/O for 1 seconds... 00:14:59.388 00:14:59.388 Latency(us) 00:14:59.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.388 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.388 Verification LBA range: start 0x0 length 0x2000 00:14:59.388 nvme0n1 : 1.02 3338.33 13.04 0.00 0.00 38043.37 5779.08 33363.78 00:14:59.388 =================================================================================================================== 00:14:59.388 Total : 3338.33 13.04 0.00 0.00 38043.37 5779.08 33363.78 00:14:59.388 0 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.645 nvmf_trace.0 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74270 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74270 ']' 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 74270 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74270 00:14:59.645 killing process with pid 74270 00:14:59.645 Received shutdown signal, test time was about 1.000000 seconds 00:14:59.645 00:14:59.645 Latency(us) 00:14:59.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.645 =================================================================================================================== 00:14:59.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74270' 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74270 00:14:59.645 08:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74270 00:14:59.902 08:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:59.902 08:27:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.902 08:27:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.902 rmmod nvme_tcp 00:14:59.902 rmmod nvme_fabrics 00:14:59.902 rmmod nvme_keyring 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74238 ']' 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74238 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74238 ']' 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74238 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.902 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74238 00:15:00.159 killing process with pid 74238 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74238' 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74238 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74238 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.159 08:27:52 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.159 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.417 08:27:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:00.417 08:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uYzh5S3Jyz /tmp/tmp.pG0ejOWI81 /tmp/tmp.nNGlligVhF 00:15:00.417 00:15:00.417 real 1m28.172s 00:15:00.417 user 2m22.027s 00:15:00.417 sys 0m27.501s 00:15:00.417 ************************************ 00:15:00.417 END TEST nvmf_tls 00:15:00.417 ************************************ 00:15:00.417 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.417 08:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.417 08:27:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.417 08:27:52 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:00.417 08:27:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.417 08:27:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.417 08:27:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.417 ************************************ 00:15:00.417 START TEST nvmf_fips 00:15:00.417 ************************************ 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:00.417 * Looking for test storage... 
00:15:00.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.417 08:27:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:00.418 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:00.676 Error setting digest 00:15:00.676 00A2C2AE467F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:00.676 00A2C2AE467F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:00.676 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:00.677 Cannot find device "nvmf_tgt_br" 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.677 Cannot find device "nvmf_tgt_br2" 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:00.677 Cannot find device "nvmf_tgt_br" 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:00.677 Cannot find device "nvmf_tgt_br2" 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:00.677 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:00.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:15:00.935 00:15:00.935 --- 10.0.0.2 ping statistics --- 00:15:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.935 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:00.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:00.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:00.935 00:15:00.935 --- 10.0.0.3 ping statistics --- 00:15:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.935 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:15:00.935 00:15:00.935 --- 10.0.0.1 ping statistics --- 00:15:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.935 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74545 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74545 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74545 ']' 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.935 08:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.935 [2024-07-15 08:27:53.090120] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
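The ip(8) commands traced above (nvmf/common.sh@154 through @207) build the virtual topology both nvmf tests in this log depend on: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the two target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace, and the three veth peers are enslaved to the nvmf_br bridge, with iptables admitting TCP port 4420 from the initiator side. Condensed from the trace into a minimal sketch (the teardown of any stale topology at the start of nvmf_veth_init is omitted; run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # initiator-side reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target-side reachability

The successful pings in the trace above are the result of exactly those checks.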
00:15:00.935 [2024-07-15 08:27:53.090270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.192 [2024-07-15 08:27:53.231220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.515 [2024-07-15 08:27:53.378206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.515 [2024-07-15 08:27:53.378294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.515 [2024-07-15 08:27:53.378315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.515 [2024-07-15 08:27:53.378329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.515 [2024-07-15 08:27:53.378341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.515 [2024-07-15 08:27:53.378386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.515 [2024-07-15 08:27:53.434192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.081 08:27:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.081 08:27:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:02.081 08:27:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.081 08:27:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.081 08:27:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:02.081 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.339 [2024-07-15 08:27:54.274866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.339 [2024-07-15 08:27:54.290798] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.339 [2024-07-15 08:27:54.291012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.339 [2024-07-15 08:27:54.321776] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:02.339 malloc0 00:15:02.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
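At this point the target side of the FIPS/TLS test is fully configured: a TLS PSK in NVMeTLSkey-1:01 interchange format has been written to test/nvmf/fips/key.txt and restricted to mode 0600, and setup_nvmf_tgt_conf has driven scripts/rpc.py so that nqn.2016-06.io.spdk:cnode1, backed by the malloc0 bdev, listens on 10.0.0.2:4420 with the PSK bound to the host nqn.2016-06.io.spdk:host1 (that is where the nvmf_tcp_psk_path deprecation warning comes from). The trace does not expand setup_nvmf_tgt_conf itself, so the following is only an approximate sketch of the equivalent rpc.py calls; the subcommand flags are assumptions rather than values read from the log:

    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                     # NVMF_TRANSPORT_OPTS resolves to '-t tcp -o' earlier in the trace
    $RPC bdev_malloc_create 64 512 -b malloc0                # size/block size assumed; the trace only shows the bdev name
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel   # --secure-channel assumed from the 'TLS support is considered experimental' notice
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"            # assumed flag; this is the step the PSK-path deprecation refers to

The initiator-side counterpart is the bdev_nvme_attach_controller call with the same --psk file that follows once bdevperf is up.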
00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74583 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74583 /var/tmp/bdevperf.sock 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74583 ']' 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.339 08:27:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:02.339 [2024-07-15 08:27:54.432471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:02.339 [2024-07-15 08:27:54.432572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74583 ] 00:15:02.596 [2024-07-15 08:27:54.569691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.596 [2024-07-15 08:27:54.683366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.596 [2024-07-15 08:27:54.735768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.162 08:27:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.162 08:27:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:03.162 08:27:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:03.727 [2024-07-15 08:27:55.597583] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.727 [2024-07-15 08:27:55.597714] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:03.727 TLSTESTn1 00:15:03.727 08:27:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.727 Running I/O for 10 seconds... 
00:15:13.753 00:15:13.753 Latency(us) 00:15:13.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.753 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:13.753 Verification LBA range: start 0x0 length 0x2000 00:15:13.753 TLSTESTn1 : 10.03 3943.60 15.40 0.00 0.00 32387.79 9830.40 29550.78 00:15:13.753 =================================================================================================================== 00:15:13.753 Total : 3943.60 15.40 0.00 0.00 32387.79 9830.40 29550.78 00:15:13.753 0 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:13.753 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.753 nvmf_trace.0 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74583 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74583 ']' 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74583 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74583 00:15:14.021 killing process with pid 74583 00:15:14.021 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.021 00:15:14.021 Latency(us) 00:15:14.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.021 =================================================================================================================== 00:15:14.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74583' 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74583 00:15:14.021 [2024-07-15 08:28:05.973026] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:14.021 08:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74583 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
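The bdevperf summary above reports a 10.03 s run at 3943.60 IOPS with 4096-byte I/Os; the 15.40 MiB/s column is just IOPS times I/O size divided by 2^20, which is easy to sanity-check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 3943.60 * 4096 / (1024 * 1024) }'   # prints 15.40, matching the table

The 32387.79 us average latency is likewise consistent with the configured queue depth of 128 at that rate (128 / 3943.60 s is roughly 32.5 ms).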
00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.279 rmmod nvme_tcp 00:15:14.279 rmmod nvme_fabrics 00:15:14.279 rmmod nvme_keyring 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74545 ']' 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74545 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74545 ']' 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74545 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74545 00:15:14.279 killing process with pid 74545 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74545' 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74545 00:15:14.279 [2024-07-15 08:28:06.303742] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:14.279 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74545 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:14.537 00:15:14.537 real 0m14.173s 00:15:14.537 user 0m19.657s 00:15:14.537 sys 0m5.529s 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.537 08:28:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:14.537 ************************************ 00:15:14.537 END TEST nvmf_fips 00:15:14.537 ************************************ 00:15:14.537 08:28:06 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:14.537 08:28:06 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:14.538 08:28:06 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:14.538 08:28:06 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.538 08:28:06 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.538 08:28:06 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:14.538 08:28:06 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.538 08:28:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.538 ************************************ 00:15:14.538 START TEST nvmf_identify 00:15:14.538 ************************************ 00:15:14.538 08:28:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:14.797 * Looking for test storage... 00:15:14.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:14.797 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.798 08:28:06 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.798 08:28:06 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:14.798 Cannot find device "nvmf_tgt_br" 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.798 Cannot find device "nvmf_tgt_br2" 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:14.798 08:28:06 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:14.798 Cannot find device "nvmf_tgt_br" 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:14.798 Cannot find device "nvmf_tgt_br2" 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:14.798 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:15.057 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:15.057 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:15.057 08:28:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:15.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:15:15.057 00:15:15.057 --- 10.0.0.2 ping statistics --- 00:15:15.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.057 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:15.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:15.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:15.057 00:15:15.057 --- 10.0.0.3 ping statistics --- 00:15:15.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.057 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:15.057 00:15:15.057 --- 10.0.0.1 ping statistics --- 00:15:15.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.057 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74926 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74926 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74926 ']' 00:15:15.057 08:28:07 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.057 08:28:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.057 [2024-07-15 08:28:07.202278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:15.057 [2024-07-15 08:28:07.202403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.314 [2024-07-15 08:28:07.346621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.314 [2024-07-15 08:28:07.461700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.314 [2024-07-15 08:28:07.461959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.314 [2024-07-15 08:28:07.462041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.314 [2024-07-15 08:28:07.462119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.314 [2024-07-15 08:28:07.462207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
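The app_setup_trace notices above describe how to pull trace data from this nvmf_tgt instance (-e 0xFFFF enables every tracepoint group, and the buffer is backed by /dev/shm/nvmf_trace.0). If this run had needed debugging, the snapshot could be taken exactly as the log suggests; the copy destination below is arbitrary:

    spdk_trace -s nvmf -i 0            # live snapshot of the enabled nvmf tracepoints for instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw shared-memory buffer for offline analysis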
00:15:15.314 [2024-07-15 08:28:07.462367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.314 [2024-07-15 08:28:07.462419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.314 [2024-07-15 08:28:07.462798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.314 [2024-07-15 08:28:07.462801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.571 [2024-07-15 08:28:07.515775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 [2024-07-15 08:28:08.148109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 Malloc0 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 [2024-07-15 08:28:08.244768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.137 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.137 [ 00:15:16.137 { 00:15:16.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:16.138 "subtype": "Discovery", 00:15:16.138 "listen_addresses": [ 00:15:16.138 { 00:15:16.138 "trtype": "TCP", 00:15:16.138 "adrfam": "IPv4", 00:15:16.138 "traddr": "10.0.0.2", 00:15:16.138 "trsvcid": "4420" 00:15:16.138 } 00:15:16.138 ], 00:15:16.138 "allow_any_host": true, 00:15:16.138 "hosts": [] 00:15:16.138 }, 00:15:16.138 { 00:15:16.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.138 "subtype": "NVMe", 00:15:16.138 "listen_addresses": [ 00:15:16.138 { 00:15:16.138 "trtype": "TCP", 00:15:16.138 "adrfam": "IPv4", 00:15:16.138 "traddr": "10.0.0.2", 00:15:16.138 "trsvcid": "4420" 00:15:16.138 } 00:15:16.138 ], 00:15:16.138 "allow_any_host": true, 00:15:16.138 "hosts": [], 00:15:16.138 "serial_number": "SPDK00000000000001", 00:15:16.138 "model_number": "SPDK bdev Controller", 00:15:16.138 "max_namespaces": 32, 00:15:16.138 "min_cntlid": 1, 00:15:16.138 "max_cntlid": 65519, 00:15:16.138 "namespaces": [ 00:15:16.138 { 00:15:16.138 "nsid": 1, 00:15:16.138 "bdev_name": "Malloc0", 00:15:16.138 "name": "Malloc0", 00:15:16.138 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:16.138 "eui64": "ABCDEF0123456789", 00:15:16.138 "uuid": "54d22671-d00e-4e1f-a490-7212bda64ea7" 00:15:16.138 } 00:15:16.138 ] 00:15:16.138 } 00:15:16.138 ] 00:15:16.138 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.138 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:16.138 [2024-07-15 08:28:08.293458] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
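The startup banner above and the controller report that follows are output from spdk_nvme_identify, pointed at the discovery subsystem that nvmf_get_subsystems just listed. Condensed from the trace, the invocation is effectively:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The -L all flag enables every debug log component, which is why the nvme_tcp and nvme_ctrlr DEBUG lines are interleaved with the identify report below.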
00:15:16.138 [2024-07-15 08:28:08.293526] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74961 ] 00:15:16.398 [2024-07-15 08:28:08.437010] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:16.398 [2024-07-15 08:28:08.440802] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:16.398 [2024-07-15 08:28:08.440827] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:16.398 [2024-07-15 08:28:08.440844] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:16.398 [2024-07-15 08:28:08.440852] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:16.398 [2024-07-15 08:28:08.441033] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:16.398 [2024-07-15 08:28:08.441089] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x246e2c0 0 00:15:16.398 [2024-07-15 08:28:08.448764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:16.398 [2024-07-15 08:28:08.448792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:16.398 [2024-07-15 08:28:08.448798] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:16.398 [2024-07-15 08:28:08.448802] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:16.398 [2024-07-15 08:28:08.448856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.448864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.448868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.398 [2024-07-15 08:28:08.448886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:16.398 [2024-07-15 08:28:08.448921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.398 [2024-07-15 08:28:08.456748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.398 [2024-07-15 08:28:08.456895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.398 [2024-07-15 08:28:08.456995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.457035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.398 [2024-07-15 08:28:08.457132] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:16.398 [2024-07-15 08:28:08.457178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:16.398 [2024-07-15 08:28:08.457335] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:16.398 [2024-07-15 08:28:08.457503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.457590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.398 
[2024-07-15 08:28:08.457629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.398 [2024-07-15 08:28:08.457771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.398 [2024-07-15 08:28:08.457921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.398 [2024-07-15 08:28:08.458037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.398 [2024-07-15 08:28:08.458079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.398 [2024-07-15 08:28:08.458171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.458210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.398 [2024-07-15 08:28:08.458334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:16.398 [2024-07-15 08:28:08.458429] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:16.398 [2024-07-15 08:28:08.458487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.458515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.458539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.398 [2024-07-15 08:28:08.458601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.398 [2024-07-15 08:28:08.458669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.398 [2024-07-15 08:28:08.458777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.398 [2024-07-15 08:28:08.458821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.398 [2024-07-15 08:28:08.458929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.458970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.398 [2024-07-15 08:28:08.459083] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:16.398 [2024-07-15 08:28:08.459205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.398 [2024-07-15 08:28:08.459316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.459355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.398 [2024-07-15 08:28:08.459381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.398 [2024-07-15 08:28:08.459445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.398 [2024-07-15 08:28:08.459571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.399 [2024-07-15 08:28:08.459689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.399 [2024-07-15 08:28:08.459811] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.399 [2024-07-15 08:28:08.459907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.399 [2024-07-15 08:28:08.459946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.399 [2024-07-15 08:28:08.460023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting st===================================================== 00:15:16.399 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:16.399 ===================================================== 00:15:16.399 Controller Capabilities/Features 00:15:16.399 ================================ 00:15:16.399 Vendor ID: 0000 00:15:16.399 Subsystem Vendor ID: 0000 00:15:16.399 Serial Number: .................... 00:15:16.399 Model Number: ........................................ 00:15:16.399 Firmware Version: 24.09 00:15:16.399 Recommended Arb Burst: 0 00:15:16.399 IEEE OUI Identifier: 00 00 00 00:15:16.399 Multi-path I/O 00:15:16.399 May have multiple subsystem ports: No 00:15:16.399 May have multiple controllers: No 00:15:16.399 Associated with SR-IOV VF: No 00:15:16.399 Max Data Transfer Size: 131072 00:15:16.399 Max Number of Namespaces: 0 00:15:16.399 Max Number of I/O Queues: 1024 00:15:16.399 NVMe Specification Version (VS): 1.3 00:15:16.399 NVMe Specification Version (Identify): 1.3 00:15:16.399 Maximum Queue Entries: 128 00:15:16.399 Contiguous Queues Required: Yes 00:15:16.399 Arbitration Mechanisms Supported 00:15:16.399 Weighted Round Robin: Not Supported 00:15:16.399 Vendor Specific: Not Supported 00:15:16.399 Reset Timeout: 15000 ms 00:15:16.399 Doorbell Stride: 4 bytes 00:15:16.399 NVM Subsystem Reset: Not Supported 00:15:16.399 Command Sets Supported 00:15:16.399 NVM Command Set: Supported 00:15:16.399 Boot Partition: Not Supported 00:15:16.399 Memory Page Size Minimum: 4096 bytes 00:15:16.399 Memory Page Size Maximum: 4096 bytes 00:15:16.399 Persistent Memory Region: Not Supported 00:15:16.399 Optional Asynchronous Events Supported 00:15:16.399 Namespace Attribute Notices: Not Supported 00:15:16.399 Firmware Activation Notices: Not Supported 00:15:16.399 ANA Change Notices: Not Supported 00:15:16.399 PLE Aggregate Log Change Notices: Not Supported 00:15:16.399 LBA Status Info Alert Notices: Not Supported 00:15:16.399 EGE Aggregate Log Change Notices: Not Supported 00:15:16.399 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.399 Zone Descriptor Change Notices: Not Supported 00:15:16.399 Discovery Log Change Notices: Supported 00:15:16.399 Controller Attributes 00:15:16.399 128-bit Host Identifier: Not Supported 00:15:16.399 Non-Operational Permissive Mode: Not Supported 00:15:16.399 NVM Sets: Not Supported 00:15:16.399 Read Recovery Levels: Not Supported 00:15:16.399 Endurance Groups: Not Supported 00:15:16.399 Predictable Latency Mode: Not Supported 00:15:16.399 Traffic Based Keep ALive: Not Supported 00:15:16.399 Namespace Granularity: Not Supported 00:15:16.399 SQ Associations: Not Supported 00:15:16.399 UUID List: Not Supported 00:15:16.399 Multi-Domain Subsystem: Not Supported 00:15:16.399 Fixed Capacity Management: Not Supported 00:15:16.399 Variable Capacity Management: Not Supported 00:15:16.399 Delete Endurance Group: Not Supported 00:15:16.399 Delete NVM Set: Not Supported 00:15:16.399 Extended LBA Formats Supported: Not Supported 00:15:16.399 Flexible Data Placement Supported: Not Supported 
00:15:16.399 00:15:16.399 Controller Memory Buffer Support 00:15:16.399 ================================ 00:15:16.399 Supported: No 00:15:16.399 00:15:16.399 Persistent Memory Region Support 00:15:16.399 ================================ 00:15:16.399 Supported: No 00:15:16.399 00:15:16.399 Admin Command Set Attributes 00:15:16.399 ============================ 00:15:16.399 Security Send/Receive: Not Supported 00:15:16.399 Format NVM: Not Supported 00:15:16.399 Firmware Activate/Download: Not Supported 00:15:16.399 Namespace Management: Not Supported 00:15:16.399 Device Self-Test: Not Supported 00:15:16.399 Directives: Not Supported 00:15:16.399 NVMe-MI: Not Supported 00:15:16.399 Virtualization Management: Not Supported 00:15:16.399 Doorbell Buffer Config: Not Supported 00:15:16.399 Get LBA Status Capability: Not Supported 00:15:16.399 Command & Feature Lockdown Capability: Not Supported 00:15:16.399 Abort Command Limit: 1 00:15:16.399 Async Event Request Limit: 4 00:15:16.399 Number of Firmware Slots: N/A 00:15:16.399 Firmware Slot 1 Read-Only: N/A 00:15:16.399 Firmware Activation Without Reset: N/A 00:15:16.399 Multiple Update Detection Support: N/A 00:15:16.399 Firmware Update Granularity: No Information Provided 00:15:16.399 Per-Namespace SMART Log: No 00:15:16.399 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.399 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:16.399 Command Effects Log Page: Not Supported 00:15:16.399 Get Log Page Extended Data: Supported 00:15:16.399 Telemetry Log Pages: Not Supported 00:15:16.399 Persistent Event Log Pages: Not Supported 00:15:16.399 Supported Log Pages Log Page: May Support 00:15:16.399 Commands Supported & Effects Log Page: Not Supported 00:15:16.399 Feature Identifiers & Effects Log Page:May Support 00:15:16.399 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.399 Data Area 4 for Telemetry Log: Not Supported 00:15:16.399 Error Log Page Entries Supported: 128 00:15:16.399 Keep Alive: Not Supported 00:15:16.399 00:15:16.399 NVM Command Set Attributes 00:15:16.399 ========================== 00:15:16.399 Submission Queue Entry Size 00:15:16.399 Max: 1 00:15:16.399 Min: 1 00:15:16.399 Completion Queue Entry Size 00:15:16.399 Max: 1 00:15:16.399 Min: 1 00:15:16.399 Number of Namespaces: 0 00:15:16.399 Compare Command: Not Supported 00:15:16.399 Write Uncorrectable Command: Not Supported 00:15:16.399 Dataset Management Command: Not Supported 00:15:16.399 Write Zeroes Command: Not Supported 00:15:16.399 Set Features Save Field: Not Supported 00:15:16.399 Reservations: Not Supported 00:15:16.399 Timestamp: Not Supported 00:15:16.399 Copy: Not Supported 00:15:16.399 Volatile Write Cache: Not Present 00:15:16.399 Atomic Write Unit (Normal): 1 00:15:16.399 Atomic Write Unit (PFail): 1 00:15:16.399 Atomic Compare & Write Unit: 1 00:15:16.399 Fused Compare & Write: Supported 00:15:16.399 Scatter-Gather List 00:15:16.399 SGL Command Set: Supported 00:15:16.399 SGL Keyed: Supported 00:15:16.399 SGL Bit Bucket Descriptor: Not Supported 00:15:16.399 SGL Metadata Pointer: Not Supported 00:15:16.399 Oversized SGL: Not Supported 00:15:16.399 SGL Metadata Address: Not Supported 00:15:16.399 SGL Offset: Supported 00:15:16.399 Transport SGL Data Block: Not Supported 00:15:16.399 Replay Protected Memory Block: Not Supported 00:15:16.399 00:15:16.399 Firmware Slot Information 00:15:16.399 ========================= 00:15:16.399 Active slot: 0 00:15:16.399 00:15:16.399 00:15:16.399 Error Log 00:15:16.399 ========= 00:15:16.399 00:15:16.399 
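Every admin completion traced in this run (and the keep-alives configured a little further down) is reaped by the host polling its admin queue; SPDK does not process them in a background thread. A minimal polling sketch, assuming a connected ctrlr; the loop shape and names are illustrative, not the test's own code:

#include <stdbool.h>
#include <stdint.h>
#include "spdk/nvme.h"

/* Poll the admin queue until told to stop. In a real application this would
 * run inside the app's reactor or a registered poller. */
static void
poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr, volatile bool *done)
{
    while (!*done) {
        /* Invokes the callbacks of any completed admin commands (including
         * keep-alive and AER completions); a negative return means failure. */
        int32_t rc = spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        if (rc < 0) {
            break;
        }
    }
}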
Active Namespaces 00:15:16.399 ================= 00:15:16.399 Discovery Log Page 00:15:16.399 ================== 00:15:16.399 Generation Counter: 2 00:15:16.399 Number of Records: 2 00:15:16.399 Record Format: 0 00:15:16.399 00:15:16.399 Discovery Log Entry 0 00:15:16.399 ---------------------- 00:15:16.399 Transport Type: 3 (TCP) 00:15:16.399 Address Family: 1 (IPv4) 00:15:16.400 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:16.400 Entry Flags: 00:15:16.400 Duplicate Returned Information: 1 00:15:16.400 Explicit Persistent Connection Support for Discovery: 1 00:15:16.400 Transport Requirements: 00:15:16.400 Secure Channel: Not Required 00:15:16.400 Port ID: 0 (0x0000) 00:15:16.400 Controller ID: 65535 (0xffff) 00:15:16.400 Admin Max SQ Size: 128 00:15:16.400 Transport Service Identifier: 4420 00:15:16.400 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:16.400 Transport Address: 10.0.0.2 00:15:16.400 Discovery Log Entry 1 00:15:16.400 ---------------------- 00:15:16.400 Transport Type: 3 (TCP) 00:15:16.400 Address Family: 1 (IPv4) 00:15:16.400 Subsystem Type: 2 (NVM Subsystem) 00:15:16.400 Entry Flags: 00:15:16.400 Duplicate Returned Information: 0 00:15:16.400 Explicit Persistent Connection Support for Discovery: 0 00:15:16.400 Transport Requirements: 00:15:16.400 Secure Channel: Not Required 00:15:16.400 Port ID: 0 (0x0000) 00:15:16.400 Controller ID: 65535 (0xffff) 00:15:16.400 Admin Max SQ Size: 128 00:15:16.400 Transport Service Identifier: 4420 00:15:16.400 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:16.400 Transport Address: 10.0.0.2 00:15:16.400 [2024-07-15 08:28:08.460121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.400 [2024-07-15 08:28:08.460145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.400 [2024-07-15 08:28:08.460174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.400 [2024-07-15 08:28:08.460225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.400 [2024-07-15 08:28:08.460233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.400 [2024-07-15 08:28:08.460237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.400 [2024-07-15 08:28:08.460247] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:16.400 [2024-07-15 08:28:08.460253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:16.400 [2024-07-15 08:28:08.460261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.400 [2024-07-15 08:28:08.460368] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:16.400 [2024-07-15 
08:28:08.460374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.400 [2024-07-15 08:28:08.460384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.400 [2024-07-15 08:28:08.460401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.400 [2024-07-15 08:28:08.460422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.400 [2024-07-15 08:28:08.460479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.400 [2024-07-15 08:28:08.460486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.400 [2024-07-15 08:28:08.460490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.400 [2024-07-15 08:28:08.460501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.400 [2024-07-15 08:28:08.460511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.400 [2024-07-15 08:28:08.460528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.400 [2024-07-15 08:28:08.460546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.400 [2024-07-15 08:28:08.460597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.400 [2024-07-15 08:28:08.460604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.400 [2024-07-15 08:28:08.460608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.400 [2024-07-15 08:28:08.460619] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.400 [2024-07-15 08:28:08.460625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:16.400 [2024-07-15 08:28:08.460634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:16.400 [2024-07-15 08:28:08.460646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.400 [2024-07-15 08:28:08.460660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460665] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.400 [2024-07-15 08:28:08.460674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.400 [2024-07-15 08:28:08.460694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.400 [2024-07-15 08:28:08.460803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.400 [2024-07-15 08:28:08.460812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.400 [2024-07-15 08:28:08.460817] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460821] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246e2c0): datao=0, datal=4096, cccid=0 00:15:16.400 [2024-07-15 08:28:08.460826] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24af940) on tqpair(0x246e2c0): expected_datao=0, payload_size=4096 00:15:16.400 [2024-07-15 08:28:08.460832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460846] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.400 [2024-07-15 08:28:08.460862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.400 [2024-07-15 08:28:08.460865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460870] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.400 [2024-07-15 08:28:08.460881] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:16.400 [2024-07-15 08:28:08.460887] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:16.400 [2024-07-15 08:28:08.460892] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:16.400 [2024-07-15 08:28:08.460898] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:16.400 [2024-07-15 08:28:08.460903] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:16.400 [2024-07-15 08:28:08.460908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:16.400 [2024-07-15 08:28:08.460918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.400 [2024-07-15 08:28:08.460926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.460934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.400 [2024-07-15 08:28:08.460943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 
len:0x0 00:15:16.400 [2024-07-15 08:28:08.460966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.400 [2024-07-15 08:28:08.461023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.400 [2024-07-15 08:28:08.461030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.400 [2024-07-15 08:28:08.461034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.400 [2024-07-15 08:28:08.461038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.401 [2024-07-15 08:28:08.461047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.401 [2024-07-15 08:28:08.461070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.401 [2024-07-15 08:28:08.461091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.401 [2024-07-15 08:28:08.461112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.401 [2024-07-15 08:28:08.461131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.401 [2024-07-15 08:28:08.461145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.401 [2024-07-15 08:28:08.461152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.401 [2024-07-15 
08:28:08.461185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24af940, cid 0, qid 0 00:15:16.401 [2024-07-15 08:28:08.461193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24afac0, cid 1, qid 0 00:15:16.401 [2024-07-15 08:28:08.461199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24afc40, cid 2, qid 0 00:15:16.401 [2024-07-15 08:28:08.461204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24afdc0, cid 3, qid 0 00:15:16.401 [2024-07-15 08:28:08.461209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24aff40, cid 4, qid 0 00:15:16.401 [2024-07-15 08:28:08.461296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.401 [2024-07-15 08:28:08.461302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.401 [2024-07-15 08:28:08.461307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24aff40) on tqpair=0x246e2c0 00:15:16.401 [2024-07-15 08:28:08.461318] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:16.401 [2024-07-15 08:28:08.461327] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:16.401 [2024-07-15 08:28:08.461341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.401 [2024-07-15 08:28:08.461373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24aff40, cid 4, qid 0 00:15:16.401 [2024-07-15 08:28:08.461434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.401 [2024-07-15 08:28:08.461441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.401 [2024-07-15 08:28:08.461445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461449] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246e2c0): datao=0, datal=4096, cccid=4 00:15:16.401 [2024-07-15 08:28:08.461454] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24aff40) on tqpair(0x246e2c0): expected_datao=0, payload_size=4096 00:15:16.401 [2024-07-15 08:28:08.461459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461466] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461471] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.401 [2024-07-15 08:28:08.461486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.401 [2024-07-15 08:28:08.461489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24aff40) on tqpair=0x246e2c0 00:15:16.401 [2024-07-15 08:28:08.461509] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:16.401 [2024-07-15 08:28:08.461543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.401 [2024-07-15 08:28:08.461565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.401 [2024-07-15 08:28:08.461604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24aff40, cid 4, qid 0 00:15:16.401 [2024-07-15 08:28:08.461613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b00c0, cid 5, qid 0 00:15:16.401 [2024-07-15 08:28:08.461741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.401 [2024-07-15 08:28:08.461750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.401 [2024-07-15 08:28:08.461754] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461758] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246e2c0): datao=0, datal=1024, cccid=4 00:15:16.401 [2024-07-15 08:28:08.461764] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24aff40) on tqpair(0x246e2c0): expected_datao=0, payload_size=1024 00:15:16.401 [2024-07-15 08:28:08.461769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461777] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461781] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.401 [2024-07-15 08:28:08.461793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.401 [2024-07-15 08:28:08.461797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b00c0) on tqpair=0x246e2c0 00:15:16.401 [2024-07-15 08:28:08.461831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.401 [2024-07-15 08:28:08.461839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.401 [2024-07-15 08:28:08.461843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24aff40) on tqpair=0x246e2c0 00:15:16.401 [2024-07-15 08:28:08.461861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246e2c0) 00:15:16.401 [2024-07-15 08:28:08.461874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.401 [2024-07-15 08:28:08.461900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24aff40, cid 4, qid 0 00:15:16.401 [2024-07-15 08:28:08.461973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.401 [2024-07-15 08:28:08.461980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.401 [2024-07-15 08:28:08.461984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.461988] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246e2c0): datao=0, datal=3072, cccid=4 00:15:16.401 [2024-07-15 08:28:08.461993] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24aff40) on tqpair(0x246e2c0): expected_datao=0, payload_size=3072 00:15:16.401 [2024-07-15 08:28:08.461998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.462005] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.462009] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.462018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.401 [2024-07-15 08:28:08.462024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.401 [2024-07-15 08:28:08.462028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.401 [2024-07-15 08:28:08.462032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24aff40) on tqpair=0x246e2c0 00:15:16.401 [2024-07-15 08:28:08.462042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x246e2c0) 00:15:16.402 [2024-07-15 08:28:08.462055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.402 [2024-07-15 08:28:08.462079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24aff40, cid 4, qid 0 00:15:16.402 [2024-07-15 08:28:08.462144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.402 [2024-07-15 08:28:08.462151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.402 [2024-07-15 08:28:08.462155] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462159] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x246e2c0): datao=0, datal=8, cccid=4 00:15:16.402 [2024-07-15 08:28:08.462165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24aff40) on tqpair(0x246e2c0): expected_datao=0, payload_size=8 00:15:16.402 [2024-07-15 08:28:08.462170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462186] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.402 [2024-07-15 08:28:08.462214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.402 [2024-07-15 08:28:08.462218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 
08:28:08.462223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24aff40) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462355] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:16.402 [2024-07-15 08:28:08.462373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24af940) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.402 [2024-07-15 08:28:08.462388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24afac0) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.402 [2024-07-15 08:28:08.462399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24afc40) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.402 [2024-07-15 08:28:08.462409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24afdc0) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.402 [2024-07-15 08:28:08.462425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246e2c0) 00:15:16.402 [2024-07-15 08:28:08.462442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.402 [2024-07-15 08:28:08.462469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24afdc0, cid 3, qid 0 00:15:16.402 [2024-07-15 08:28:08.462533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.402 [2024-07-15 08:28:08.462541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.402 [2024-07-15 08:28:08.462545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24afdc0) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246e2c0) 00:15:16.402 [2024-07-15 08:28:08.462574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.402 [2024-07-15 08:28:08.462597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24afdc0, cid 3, qid 0 00:15:16.402 [2024-07-15 08:28:08.462667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.402 [2024-07-15 08:28:08.462674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.402 [2024-07-15 08:28:08.462679] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24afdc0) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.462690] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:16.402 [2024-07-15 08:28:08.462695] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:16.402 [2024-07-15 08:28:08.462706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.462711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.466731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x246e2c0) 00:15:16.402 [2024-07-15 08:28:08.466758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.402 [2024-07-15 08:28:08.466805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24afdc0, cid 3, qid 0 00:15:16.402 [2024-07-15 08:28:08.466862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.402 [2024-07-15 08:28:08.466870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.402 [2024-07-15 08:28:08.466874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.402 [2024-07-15 08:28:08.466879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24afdc0) on tqpair=0x246e2c0 00:15:16.402 [2024-07-15 08:28:08.466889] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:16.402 00:15:16.402 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:16.402 [2024-07-15 08:28:08.511302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:16.402 [2024-07-15 08:28:08.511385] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74963 ] 00:15:16.666 [2024-07-15 08:28:08.656961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:16.666 [2024-07-15 08:28:08.657043] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:16.666 [2024-07-15 08:28:08.657050] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:16.666 [2024-07-15 08:28:08.657066] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:16.666 [2024-07-15 08:28:08.657074] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:16.666 [2024-07-15 08:28:08.657231] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:16.666 [2024-07-15 08:28:08.657285] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4b72c0 0 00:15:16.666 [2024-07-15 08:28:08.669747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:16.666 [2024-07-15 08:28:08.669784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:16.666 [2024-07-15 08:28:08.669791] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:16.666 [2024-07-15 08:28:08.669795] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:16.666 [2024-07-15 08:28:08.669851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.669860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.669865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.669882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:16.666 [2024-07-15 08:28:08.669923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.677750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.677791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.677797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.677804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.677819] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:16.666 [2024-07-15 08:28:08.677832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:16.666 [2024-07-15 08:28:08.677842] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:16.666 [2024-07-15 08:28:08.677877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.677884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.677889] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.677903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.677944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.678032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.678036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.678047] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:16.666 [2024-07-15 08:28:08.678056] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:16.666 [2024-07-15 08:28:08.678065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.678082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.678102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.678159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.678163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.678175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:16.666 [2024-07-15 08:28:08.678185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.666 [2024-07-15 08:28:08.678193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.678210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.678228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.678282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.678286] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.678297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.666 [2024-07-15 08:28:08.678308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.678325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.678343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.678399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.678403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.678413] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:16.666 [2024-07-15 08:28:08.678419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:16.666 [2024-07-15 08:28:08.678428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.666 [2024-07-15 08:28:08.678534] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:16.666 [2024-07-15 08:28:08.678548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.666 [2024-07-15 08:28:08.678559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.678576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.678597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.678654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.678658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.678669] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.666 [2024-07-15 08:28:08.678680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.678697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.678715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.666 [2024-07-15 08:28:08.678783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.666 [2024-07-15 08:28:08.678787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.666 [2024-07-15 08:28:08.678797] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.666 [2024-07-15 08:28:08.678803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:16.666 [2024-07-15 08:28:08.678812] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:16.666 [2024-07-15 08:28:08.678824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.666 [2024-07-15 08:28:08.678838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.678843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.666 [2024-07-15 08:28:08.678851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.666 [2024-07-15 08:28:08.678872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.666 [2024-07-15 08:28:08.678976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.666 [2024-07-15 08:28:08.678991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.666 [2024-07-15 08:28:08.678996] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.679001] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=4096, cccid=0 00:15:16.666 [2024-07-15 08:28:08.679007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f8940) on tqpair(0x4b72c0): expected_datao=0, payload_size=4096 00:15:16.666 [2024-07-15 08:28:08.679012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.666 [2024-07-15 08:28:08.679023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679028] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 
08:28:08.679038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.667 [2024-07-15 08:28:08.679044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.667 [2024-07-15 08:28:08.679049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.667 [2024-07-15 08:28:08.679076] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:16.667 [2024-07-15 08:28:08.679083] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:16.667 [2024-07-15 08:28:08.679088] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:16.667 [2024-07-15 08:28:08.679093] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:16.667 [2024-07-15 08:28:08.679099] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:16.667 [2024-07-15 08:28:08.679104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.667 [2024-07-15 08:28:08.679164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.667 [2024-07-15 08:28:08.679220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.667 [2024-07-15 08:28:08.679227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.667 [2024-07-15 08:28:08.679232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.667 [2024-07-15 08:28:08.679245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.667 [2024-07-15 08:28:08.679269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4b72c0) 00:15:16.667 
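The trace here shows the host arming its Asynchronous Event Requests after SET FEATURES ASYNC EVENT CONFIGURATION (cid 0 above, cid 1 through 3 continuing just below). An application that wants those events (for example discovery log change notices) registers a callback on the controller; a small sketch, assuming a connected ctrlr, with illustrative names:

#include <stdio.h>
#include "spdk/nvme.h"

/* Called from spdk_nvme_ctrlr_process_admin_completions() whenever one of the
 * outstanding ASYNC EVENT REQUEST commands completes. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        return;
    }
    /* cdw0 carries the async event type and info per the NVMe spec. */
    printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

static void
setup_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}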
[2024-07-15 08:28:08.679284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.667 [2024-07-15 08:28:08.679292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.667 [2024-07-15 08:28:08.679314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.667 [2024-07-15 08:28:08.679334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.667 [2024-07-15 08:28:08.679391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8940, cid 0, qid 0 00:15:16.667 [2024-07-15 08:28:08.679399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8ac0, cid 1, qid 0 00:15:16.667 [2024-07-15 08:28:08.679405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8c40, cid 2, qid 0 00:15:16.667 [2024-07-15 08:28:08.679410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.667 [2024-07-15 08:28:08.679415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.667 [2024-07-15 08:28:08.679504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.667 [2024-07-15 08:28:08.679512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.667 [2024-07-15 08:28:08.679516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.667 [2024-07-15 08:28:08.679527] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:16.667 [2024-07-15 08:28:08.679537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679547] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.667 [2024-07-15 08:28:08.679597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.667 [2024-07-15 08:28:08.679656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.667 [2024-07-15 08:28:08.679664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.667 [2024-07-15 08:28:08.679668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.667 [2024-07-15 08:28:08.679753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.667 [2024-07-15 08:28:08.679790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.667 [2024-07-15 08:28:08.679811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.667 [2024-07-15 08:28:08.679876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.667 [2024-07-15 08:28:08.679883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.667 [2024-07-15 08:28:08.679888] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679892] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=4096, cccid=4 00:15:16.667 [2024-07-15 08:28:08.679897] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f8f40) on tqpair(0x4b72c0): expected_datao=0, payload_size=4096 00:15:16.667 [2024-07-15 08:28:08.679902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679911] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679916] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.667 [2024-07-15 08:28:08.679932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:16.667 [2024-07-15 08:28:08.679936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.667 [2024-07-15 08:28:08.679940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.667 [2024-07-15 08:28:08.679958] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:16.667 [2024-07-15 08:28:08.679970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:16.667 [2024-07-15 08:28:08.679990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.679994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.680022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.668 [2024-07-15 08:28:08.680099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.668 [2024-07-15 08:28:08.680111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.668 [2024-07-15 08:28:08.680116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680121] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=4096, cccid=4 00:15:16.668 [2024-07-15 08:28:08.680126] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f8f40) on tqpair(0x4b72c0): expected_datao=0, payload_size=4096 00:15:16.668 [2024-07-15 08:28:08.680131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680139] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680144] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.680185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.680239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.668 [2024-07-15 08:28:08.680302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.668 [2024-07-15 08:28:08.680309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.668 [2024-07-15 08:28:08.680313] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680318] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=4096, cccid=4 00:15:16.668 [2024-07-15 08:28:08.680323] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f8f40) on tqpair(0x4b72c0): expected_datao=0, payload_size=4096 00:15:16.668 [2024-07-15 08:28:08.680328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680335] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680340] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.680373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680421] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:16.668 [2024-07-15 08:28:08.680426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:16.668 [2024-07-15 08:28:08.680432] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:16.668 [2024-07-15 08:28:08.680453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.680475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.668 [2024-07-15 08:28:08.680516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.668 [2024-07-15 08:28:08.680531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f90c0, cid 5, qid 0 00:15:16.668 [2024-07-15 08:28:08.680591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.680625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f90c0) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.680651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.680685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f90c0, cid 5, qid 0 00:15:16.668 [2024-07-15 08:28:08.680750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f90c0) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.680779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.680812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f90c0, cid 5, qid 0 00:15:16.668 [2024-07-15 08:28:08.680859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f90c0) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.680892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.680904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.680923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f90c0, cid 5, qid 0 00:15:16.668 [2024-07-15 08:28:08.680976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.668 [2024-07-15 08:28:08.680983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.668 [2024-07-15 08:28:08.680988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.680992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f90c0) on tqpair=0x4b72c0 00:15:16.668 [2024-07-15 08:28:08.681012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.681018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.681026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.681034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.681039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.681046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.681055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.681059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.681066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.681078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.681083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4b72c0) 00:15:16.668 [2024-07-15 08:28:08.681090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.668 [2024-07-15 08:28:08.681111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f90c0, cid 5, qid 0 00:15:16.668 [2024-07-15 08:28:08.681119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8f40, cid 4, qid 0 00:15:16.668 [2024-07-15 08:28:08.681124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f9240, cid 6, qid 0 00:15:16.668 [2024-07-15 
08:28:08.681129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f93c0, cid 7, qid 0 00:15:16.668 [2024-07-15 08:28:08.681267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.668 [2024-07-15 08:28:08.681282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.668 [2024-07-15 08:28:08.681287] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.668 [2024-07-15 08:28:08.681292] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=8192, cccid=5 00:15:16.668 [2024-07-15 08:28:08.681297] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f90c0) on tqpair(0x4b72c0): expected_datao=0, payload_size=8192 00:15:16.668 [2024-07-15 08:28:08.681302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681321] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681326] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.669 [2024-07-15 08:28:08.681339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.669 [2024-07-15 08:28:08.681343] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681347] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=512, cccid=4 00:15:16.669 [2024-07-15 08:28:08.681353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f8f40) on tqpair(0x4b72c0): expected_datao=0, payload_size=512 00:15:16.669 [2024-07-15 08:28:08.681358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681365] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681369] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.669 [2024-07-15 08:28:08.681381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.669 [2024-07-15 08:28:08.681385] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681389] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=512, cccid=6 00:15:16.669 [2024-07-15 08:28:08.681394] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f9240) on tqpair(0x4b72c0): expected_datao=0, payload_size=512 00:15:16.669 [2024-07-15 08:28:08.681400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681406] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681411] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.669 [2024-07-15 08:28:08.681423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.669 [2024-07-15 08:28:08.681427] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681431] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b72c0): datao=0, datal=4096, cccid=7 00:15:16.669 [2024-07-15 08:28:08.681436] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4f93c0) on tqpair(0x4b72c0): expected_datao=0, payload_size=4096 00:15:16.669 [2024-07-15 08:28:08.681442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681449] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681453] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.669 [2024-07-15 08:28:08.681465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.669 [2024-07-15 08:28:08.681469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f90c0) on tqpair=0x4b72c0 00:15:16.669 [2024-07-15 08:28:08.681493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.669 [2024-07-15 08:28:08.681501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.669 [2024-07-15 08:28:08.681505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8f40) on tqpair=0x4b72c0 00:15:16.669 [2024-07-15 08:28:08.681523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.669 [2024-07-15 08:28:08.681530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.669 [2024-07-15 08:28:08.681534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f9240) on tqpair=0x4b72c0 00:15:16.669 [2024-07-15 08:28:08.681546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.669 [2024-07-15 08:28:08.681553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.669 [2024-07-15 08:28:08.681557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.669 [2024-07-15 08:28:08.681562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f93c0) on tqpair=0x4b72c0 00:15:16.669 ===================================================== 00:15:16.669 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.669 ===================================================== 00:15:16.669 Controller Capabilities/Features 00:15:16.669 ================================ 00:15:16.669 Vendor ID: 8086 00:15:16.669 Subsystem Vendor ID: 8086 00:15:16.669 Serial Number: SPDK00000000000001 00:15:16.669 Model Number: SPDK bdev Controller 00:15:16.669 Firmware Version: 24.09 00:15:16.669 Recommended Arb Burst: 6 00:15:16.669 IEEE OUI Identifier: e4 d2 5c 00:15:16.669 Multi-path I/O 00:15:16.669 May have multiple subsystem ports: Yes 00:15:16.669 May have multiple controllers: Yes 00:15:16.669 Associated with SR-IOV VF: No 00:15:16.669 Max Data Transfer Size: 131072 00:15:16.669 Max Number of Namespaces: 32 00:15:16.669 Max Number of I/O Queues: 127 00:15:16.669 NVMe Specification Version (VS): 1.3 00:15:16.669 NVMe Specification Version (Identify): 1.3 00:15:16.669 Maximum Queue Entries: 128 00:15:16.669 Contiguous Queues Required: Yes 00:15:16.669 Arbitration Mechanisms Supported 00:15:16.669 Weighted Round Robin: Not Supported 00:15:16.669 Vendor Specific: Not Supported 00:15:16.669 Reset Timeout: 15000 ms 00:15:16.669 
Doorbell Stride: 4 bytes 00:15:16.669 NVM Subsystem Reset: Not Supported 00:15:16.669 Command Sets Supported 00:15:16.669 NVM Command Set: Supported 00:15:16.669 Boot Partition: Not Supported 00:15:16.669 Memory Page Size Minimum: 4096 bytes 00:15:16.669 Memory Page Size Maximum: 4096 bytes 00:15:16.669 Persistent Memory Region: Not Supported 00:15:16.669 Optional Asynchronous Events Supported 00:15:16.669 Namespace Attribute Notices: Supported 00:15:16.669 Firmware Activation Notices: Not Supported 00:15:16.669 ANA Change Notices: Not Supported 00:15:16.669 PLE Aggregate Log Change Notices: Not Supported 00:15:16.669 LBA Status Info Alert Notices: Not Supported 00:15:16.669 EGE Aggregate Log Change Notices: Not Supported 00:15:16.669 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.669 Zone Descriptor Change Notices: Not Supported 00:15:16.669 Discovery Log Change Notices: Not Supported 00:15:16.669 Controller Attributes 00:15:16.669 128-bit Host Identifier: Supported 00:15:16.669 Non-Operational Permissive Mode: Not Supported 00:15:16.669 NVM Sets: Not Supported 00:15:16.669 Read Recovery Levels: Not Supported 00:15:16.669 Endurance Groups: Not Supported 00:15:16.669 Predictable Latency Mode: Not Supported 00:15:16.669 Traffic Based Keep ALive: Not Supported 00:15:16.669 Namespace Granularity: Not Supported 00:15:16.669 SQ Associations: Not Supported 00:15:16.669 UUID List: Not Supported 00:15:16.669 Multi-Domain Subsystem: Not Supported 00:15:16.669 Fixed Capacity Management: Not Supported 00:15:16.669 Variable Capacity Management: Not Supported 00:15:16.669 Delete Endurance Group: Not Supported 00:15:16.669 Delete NVM Set: Not Supported 00:15:16.669 Extended LBA Formats Supported: Not Supported 00:15:16.669 Flexible Data Placement Supported: Not Supported 00:15:16.669 00:15:16.669 Controller Memory Buffer Support 00:15:16.669 ================================ 00:15:16.669 Supported: No 00:15:16.669 00:15:16.669 Persistent Memory Region Support 00:15:16.669 ================================ 00:15:16.669 Supported: No 00:15:16.669 00:15:16.669 Admin Command Set Attributes 00:15:16.669 ============================ 00:15:16.669 Security Send/Receive: Not Supported 00:15:16.669 Format NVM: Not Supported 00:15:16.669 Firmware Activate/Download: Not Supported 00:15:16.669 Namespace Management: Not Supported 00:15:16.670 Device Self-Test: Not Supported 00:15:16.670 Directives: Not Supported 00:15:16.670 NVMe-MI: Not Supported 00:15:16.670 Virtualization Management: Not Supported 00:15:16.670 Doorbell Buffer Config: Not Supported 00:15:16.670 Get LBA Status Capability: Not Supported 00:15:16.670 Command & Feature Lockdown Capability: Not Supported 00:15:16.670 Abort Command Limit: 4 00:15:16.670 Async Event Request Limit: 4 00:15:16.670 Number of Firmware Slots: N/A 00:15:16.670 Firmware Slot 1 Read-Only: N/A 00:15:16.670 Firmware Activation Without Reset: N/A 00:15:16.670 Multiple Update Detection Support: N/A 00:15:16.670 Firmware Update Granularity: No Information Provided 00:15:16.670 Per-Namespace SMART Log: No 00:15:16.670 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.670 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:16.670 Command Effects Log Page: Supported 00:15:16.670 Get Log Page Extended Data: Supported 00:15:16.670 Telemetry Log Pages: Not Supported 00:15:16.670 Persistent Event Log Pages: Not Supported 00:15:16.670 Supported Log Pages Log Page: May Support 00:15:16.670 Commands Supported & Effects Log Page: Not Supported 00:15:16.670 Feature Identifiers & 
Effects Log Page:May Support 00:15:16.670 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.670 Data Area 4 for Telemetry Log: Not Supported 00:15:16.670 Error Log Page Entries Supported: 128 00:15:16.670 Keep Alive: Supported 00:15:16.670 Keep Alive Granularity: 10000 ms 00:15:16.670 00:15:16.670 NVM Command Set Attributes 00:15:16.670 ========================== 00:15:16.670 Submission Queue Entry Size 00:15:16.670 Max: 64 00:15:16.670 Min: 64 00:15:16.670 Completion Queue Entry Size 00:15:16.670 Max: 16 00:15:16.670 Min: 16 00:15:16.670 Number of Namespaces: 32 00:15:16.670 Compare Command: Supported 00:15:16.670 Write Uncorrectable Command: Not Supported 00:15:16.670 Dataset Management Command: Supported 00:15:16.670 Write Zeroes Command: Supported 00:15:16.670 Set Features Save Field: Not Supported 00:15:16.670 Reservations: Supported 00:15:16.670 Timestamp: Not Supported 00:15:16.670 Copy: Supported 00:15:16.670 Volatile Write Cache: Present 00:15:16.670 Atomic Write Unit (Normal): 1 00:15:16.670 Atomic Write Unit (PFail): 1 00:15:16.670 Atomic Compare & Write Unit: 1 00:15:16.670 Fused Compare & Write: Supported 00:15:16.670 Scatter-Gather List 00:15:16.670 SGL Command Set: Supported 00:15:16.670 SGL Keyed: Supported 00:15:16.670 SGL Bit Bucket Descriptor: Not Supported 00:15:16.670 SGL Metadata Pointer: Not Supported 00:15:16.670 Oversized SGL: Not Supported 00:15:16.670 SGL Metadata Address: Not Supported 00:15:16.670 SGL Offset: Supported 00:15:16.670 Transport SGL Data Block: Not Supported 00:15:16.670 Replay Protected Memory Block: Not Supported 00:15:16.670 00:15:16.670 Firmware Slot Information 00:15:16.670 ========================= 00:15:16.670 Active slot: 1 00:15:16.670 Slot 1 Firmware Revision: 24.09 00:15:16.670 00:15:16.670 00:15:16.670 Commands Supported and Effects 00:15:16.670 ============================== 00:15:16.670 Admin Commands 00:15:16.670 -------------- 00:15:16.670 Get Log Page (02h): Supported 00:15:16.670 Identify (06h): Supported 00:15:16.670 Abort (08h): Supported 00:15:16.670 Set Features (09h): Supported 00:15:16.670 Get Features (0Ah): Supported 00:15:16.670 Asynchronous Event Request (0Ch): Supported 00:15:16.670 Keep Alive (18h): Supported 00:15:16.670 I/O Commands 00:15:16.670 ------------ 00:15:16.670 Flush (00h): Supported LBA-Change 00:15:16.670 Write (01h): Supported LBA-Change 00:15:16.670 Read (02h): Supported 00:15:16.670 Compare (05h): Supported 00:15:16.670 Write Zeroes (08h): Supported LBA-Change 00:15:16.670 Dataset Management (09h): Supported LBA-Change 00:15:16.670 Copy (19h): Supported LBA-Change 00:15:16.670 00:15:16.670 Error Log 00:15:16.670 ========= 00:15:16.670 00:15:16.670 Arbitration 00:15:16.670 =========== 00:15:16.670 Arbitration Burst: 1 00:15:16.670 00:15:16.670 Power Management 00:15:16.670 ================ 00:15:16.670 Number of Power States: 1 00:15:16.670 Current Power State: Power State #0 00:15:16.670 Power State #0: 00:15:16.670 Max Power: 0.00 W 00:15:16.670 Non-Operational State: Operational 00:15:16.670 Entry Latency: Not Reported 00:15:16.670 Exit Latency: Not Reported 00:15:16.670 Relative Read Throughput: 0 00:15:16.670 Relative Read Latency: 0 00:15:16.670 Relative Write Throughput: 0 00:15:16.670 Relative Write Latency: 0 00:15:16.670 Idle Power: Not Reported 00:15:16.670 Active Power: Not Reported 00:15:16.670 Non-Operational Permissive Mode: Not Supported 00:15:16.670 00:15:16.670 Health Information 00:15:16.670 ================== 00:15:16.670 Critical Warnings: 00:15:16.670 Available Spare Space: 
OK 00:15:16.670 Temperature: OK 00:15:16.670 Device Reliability: OK 00:15:16.670 Read Only: No 00:15:16.670 Volatile Memory Backup: OK 00:15:16.670 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:16.670 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:16.670 Available Spare: 0% 00:15:16.670 Available Spare Threshold: 0% 00:15:16.670 Life Percentage Used:[2024-07-15 08:28:08.681672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.670 [2024-07-15 08:28:08.681680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4b72c0) 00:15:16.670 [2024-07-15 08:28:08.681688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.670 [2024-07-15 08:28:08.681713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f93c0, cid 7, qid 0 00:15:16.670 [2024-07-15 08:28:08.681781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.670 [2024-07-15 08:28:08.681789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.670 [2024-07-15 08:28:08.681793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.670 [2024-07-15 08:28:08.681798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f93c0) on tqpair=0x4b72c0 00:15:16.670 [2024-07-15 08:28:08.681839] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:16.670 [2024-07-15 08:28:08.681851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8940) on tqpair=0x4b72c0 00:15:16.670 [2024-07-15 08:28:08.681859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.670 [2024-07-15 08:28:08.681865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8ac0) on tqpair=0x4b72c0 00:15:16.670 [2024-07-15 08:28:08.681870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.670 [2024-07-15 08:28:08.681876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8c40) on tqpair=0x4b72c0 00:15:16.670 [2024-07-15 08:28:08.681881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.670 [2024-07-15 08:28:08.681887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.670 [2024-07-15 08:28:08.681892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.670 [2024-07-15 08:28:08.681901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.670 [2024-07-15 08:28:08.681907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.670 [2024-07-15 08:28:08.681911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.670 [2024-07-15 08:28:08.681919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.670 [2024-07-15 08:28:08.681943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.670 [2024-07-15 08:28:08.681991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.670 [2024-07-15 08:28:08.681999] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.670 [2024-07-15 08:28:08.682003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.670 [2024-07-15 08:28:08.682008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682151] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:16.671 [2024-07-15 08:28:08.682157] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:16.671 [2024-07-15 08:28:08.682167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682363] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682714] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.682896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.682942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.682954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.682959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 [2024-07-15 08:28:08.682974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.682984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.682992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.683010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.683071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.683086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.683091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.683096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.671 
[2024-07-15 08:28:08.683109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.683114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.683119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.671 [2024-07-15 08:28:08.683127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.671 [2024-07-15 08:28:08.683147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.671 [2024-07-15 08:28:08.683192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.671 [2024-07-15 08:28:08.683202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.671 [2024-07-15 08:28:08.683207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.671 [2024-07-15 08:28:08.683211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.683240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.683259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.683373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.683384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.683389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.683422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.683441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.683490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.683497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.683502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 
08:28:08.683526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.683534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.683557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.683606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.683613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.683618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.683650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.683667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.683736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.683746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.683750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.683783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.683804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.683850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.683857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.683861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.683895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.683912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.683970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.683977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.683982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.683986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.683997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.684014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.684032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.684082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.684089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.684093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.684109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.684126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.684143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 08:28:08.684189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.684196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.684200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.684216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.684232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.684250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 [2024-07-15 
08:28:08.684297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.672 [2024-07-15 08:28:08.684304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.672 [2024-07-15 08:28:08.684308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.672 [2024-07-15 08:28:08.684323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.672 [2024-07-15 08:28:08.684332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.672 [2024-07-15 08:28:08.684340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.672 [2024-07-15 08:28:08.684358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.672 
[2024-07-15 08:28:08.685665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.673 [2024-07-15 08:28:08.685673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.673 [2024-07-15 08:28:08.685677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.673 [2024-07-15 08:28:08.685682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.673 [2024-07-15 08:28:08.685693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.673 [2024-07-15 08:28:08.685698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.673 [2024-07-15 08:28:08.685702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b72c0) 00:15:16.673 [2024-07-15 08:28:08.685710] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.673 [2024-07-15 08:28:08.689779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4f8dc0, cid 3, qid 0 00:15:16.673 [2024-07-15 08:28:08.689834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.673 [2024-07-15 08:28:08.689844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.673 [2024-07-15 08:28:08.689849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.673 [2024-07-15 08:28:08.689854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4f8dc0) on tqpair=0x4b72c0 00:15:16.673 [2024-07-15 08:28:08.689864] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:16.673 0% 00:15:16.673 Data Units Read: 0 00:15:16.673 Data Units Written: 0 00:15:16.673 Host Read Commands: 0 00:15:16.673 Host Write Commands: 0 00:15:16.673 Controller Busy Time: 0 minutes 00:15:16.673 Power Cycles: 0 00:15:16.673 Power On Hours: 0 hours 00:15:16.673 Unsafe Shutdowns: 0 00:15:16.673 Unrecoverable Media Errors: 0 00:15:16.673 Lifetime Error Log Entries: 0 00:15:16.673 Warning Temperature Time: 0 minutes 00:15:16.673 Critical Temperature Time: 0 minutes 00:15:16.673 00:15:16.673 Number of Queues 00:15:16.673 ================ 00:15:16.673 Number of I/O Submission Queues: 127 00:15:16.673 Number of I/O Completion Queues: 127 00:15:16.673 00:15:16.673 Active Namespaces 00:15:16.673 ================= 00:15:16.673 Namespace ID:1 00:15:16.673 Error Recovery Timeout: Unlimited 00:15:16.673 Command Set Identifier: NVM (00h) 00:15:16.673 Deallocate: Supported 00:15:16.673 Deallocated/Unwritten Error: Not Supported 00:15:16.673 Deallocated Read Value: Unknown 00:15:16.673 Deallocate in Write Zeroes: Not Supported 00:15:16.673 Deallocated Guard Field: 0xFFFF 00:15:16.673 Flush: Supported 00:15:16.673 Reservation: Supported 00:15:16.673 Namespace Sharing Capabilities: Multiple Controllers 00:15:16.673 Size (in LBAs): 131072 (0GiB) 00:15:16.673 Capacity (in LBAs): 131072 (0GiB) 00:15:16.673 Utilization (in LBAs): 131072 (0GiB) 00:15:16.673 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:16.674 EUI64: ABCDEF0123456789 00:15:16.674 UUID: 54d22671-d00e-4e1f-a490-7212bda64ea7 00:15:16.674 Thin Provisioning: Not Supported 00:15:16.674 Per-NS Atomic Units: Yes 00:15:16.674 Atomic Boundary Size (Normal): 0 00:15:16.674 Atomic Boundary Size (PFail): 0 00:15:16.674 Atomic Boundary Offset: 0 00:15:16.674 Maximum Single Source Range Length: 65535 00:15:16.674 Maximum Copy Length: 65535 00:15:16.674 Maximum Source Range Count: 1 00:15:16.674 NGUID/EUI64 Never Reused: No 00:15:16.674 Namespace Write Protected: No 00:15:16.674 Number of LBA Formats: 1 00:15:16.674 Current LBA Format: LBA Format #00 00:15:16.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:16.674 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT 
SIGTERM EXIT 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.674 rmmod nvme_tcp 00:15:16.674 rmmod nvme_fabrics 00:15:16.674 rmmod nvme_keyring 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74926 ']' 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74926 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74926 ']' 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74926 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74926 00:15:16.674 killing process with pid 74926 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74926' 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74926 00:15:16.674 08:28:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74926 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.931 08:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.227 08:28:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:17.227 00:15:17.227 real 0m2.462s 00:15:17.227 user 0m6.665s 00:15:17.227 sys 0m0.653s 00:15:17.227 08:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.227 08:28:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.227 ************************************ 00:15:17.227 END TEST nvmf_identify 00:15:17.227 ************************************ 00:15:17.227 
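The identify pass above pulled the controller, namespace and transport details from nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and then deleted the subsystem. For reference only, a minimal hand-run equivalent with the kernel initiator and nvme-cli is sketched below; it assumes nvme-cli is installed and that a target is still listening (no longer true at this point in the run), and the /dev/nvme0 names are placeholders for whatever the kernel assigns.

  nvme discover -t tcp -a 10.0.0.2 -s 4420        # list subsystems the target advertises
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                       # find the newly attached /dev/nvmeXnY
  nvme id-ctrl /dev/nvme0                         # controller data, as dumped above
  nvme id-ns /dev/nvme0n1                         # namespace data: NGUID, EUI64, LBA format
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The test itself drives SPDK's userspace TCP initiator (the nvme_tcp.c debug lines above), not the kernel path, so this sketch only shows where the dumped fields come from.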
08:28:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.227 08:28:09 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:17.227 08:28:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.227 08:28:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.227 08:28:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.227 ************************************ 00:15:17.227 START TEST nvmf_perf 00:15:17.227 ************************************ 00:15:17.227 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:17.227 * Looking for test storage... 00:15:17.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.227 08:28:09 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.227 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:17.227 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.227 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.227 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:17.228 
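nvmftestinit, traced next, builds the NET_TYPE=virt topology the rest of the perf test depends on: an initiator veth in the root namespace, target veths inside nvmf_tgt_ns_spdk, everything joined by the nvmf_br bridge, and TCP port 4420 opened in iptables. A condensed sketch of that setup, assuming iproute2 and iptables and using the same interface names and 10.0.0.x addresses that appear in the trace below (the script also adds a second target interface, nvmf_tgt_if2 at 10.0.0.3, the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target namespace, as verified in the trace

With the links up, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the target side reach each other through the bridge, which is what the ping checks in the trace confirm.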
08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:17.228 Cannot find device "nvmf_tgt_br" 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.228 Cannot find device "nvmf_tgt_br2" 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:17.228 Cannot find device "nvmf_tgt_br" 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:17.228 Cannot find device "nvmf_tgt_br2" 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@159 -- # true 00:15:17.228 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.495 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:15:17.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:15:17.495 00:15:17.495 --- 10.0.0.2 ping statistics --- 00:15:17.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.495 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:15:17.495 00:15:17.495 --- 10.0.0.3 ping statistics --- 00:15:17.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.495 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:17.495 00:15:17.495 --- 10.0.0.1 ping statistics --- 00:15:17.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.495 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75131 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75131 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75131 ']' 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.495 08:28:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:17.752 [2024-07-15 08:28:09.721617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:17.752 [2024-07-15 08:28:09.721731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.752 [2024-07-15 08:28:09.854680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.009 [2024-07-15 08:28:10.007501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.009 [2024-07-15 08:28:10.007601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.009 [2024-07-15 08:28:10.007621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.009 [2024-07-15 08:28:10.007635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.009 [2024-07-15 08:28:10.007645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.009 [2024-07-15 08:28:10.007757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.009 [2024-07-15 08:28:10.008345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.009 [2024-07-15 08:28:10.008435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.009 [2024-07-15 08:28:10.008423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.009 [2024-07-15 08:28:10.071408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:18.945 08:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:19.510 08:28:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:19.510 08:28:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:19.768 08:28:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:19.768 08:28:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.334 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:20.334 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:20.334 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:20.334 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:20.334 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:20.334 [2024-07-15 08:28:12.496197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
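With nvmf_tgt listening on its RPC socket inside the namespace and the TCP transport created, the perf script provisions the subsystem over JSON-RPC. The calls that follow in the trace reduce to the sequence sketched here (same rpc.py, NQN and bdev names as in the trace; Malloc0 and Nvme0n1 come from the bdev_malloc_create call and the generated Nvme0 config loaded just above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                                    # done just above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # becomes NSID 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1           # becomes NSID 2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, each spdk_nvme_perf run below reaches the target with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.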
00:15:20.592 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.592 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:20.592 08:28:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.221 08:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:21.221 08:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:21.479 08:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.737 [2024-07-15 08:28:13.682847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.737 08:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:21.995 08:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:21.995 08:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:21.995 08:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:21.995 08:28:14 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:23.370 Initializing NVMe Controllers 00:15:23.370 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:23.370 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:23.370 Initialization complete. Launching workers. 00:15:23.370 ======================================================== 00:15:23.370 Latency(us) 00:15:23.370 Device Information : IOPS MiB/s Average min max 00:15:23.370 PCIE (0000:00:10.0) NSID 1 from core 0: 24150.87 94.34 1325.26 305.28 7010.61 00:15:23.370 ======================================================== 00:15:23.370 Total : 24150.87 94.34 1325.26 305.28 7010.61 00:15:23.370 00:15:23.370 08:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:24.305 Initializing NVMe Controllers 00:15:24.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:24.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:24.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:24.305 Initialization complete. Launching workers. 
00:15:24.305 ======================================================== 00:15:24.305 Latency(us) 00:15:24.305 Device Information : IOPS MiB/s Average min max 00:15:24.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3654.98 14.28 272.20 105.03 7136.50 00:15:24.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8103.68 4990.65 12040.55 00:15:24.306 ======================================================== 00:15:24.306 Total : 3778.98 14.76 529.17 105.03 12040.55 00:15:24.306 00:15:24.564 08:28:16 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:25.982 Initializing NVMe Controllers 00:15:25.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:25.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:25.982 Initialization complete. Launching workers. 00:15:25.982 ======================================================== 00:15:25.982 Latency(us) 00:15:25.982 Device Information : IOPS MiB/s Average min max 00:15:25.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8105.58 31.66 3948.71 681.21 10597.24 00:15:25.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3929.31 15.35 8175.96 6088.71 15838.82 00:15:25.982 ======================================================== 00:15:25.982 Total : 12034.89 47.01 5328.88 681.21 15838.82 00:15:25.982 00:15:25.982 08:28:17 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:25.982 08:28:17 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:28.511 Initializing NVMe Controllers 00:15:28.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.511 Controller IO queue size 128, less than required. 00:15:28.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:28.511 Controller IO queue size 128, less than required. 00:15:28.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:28.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:28.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:28.511 Initialization complete. Launching workers. 
00:15:28.511 ======================================================== 00:15:28.511 Latency(us) 00:15:28.511 Device Information : IOPS MiB/s Average min max 00:15:28.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1611.79 402.95 81593.48 48324.81 186434.97 00:15:28.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.92 159.73 209760.85 77550.93 343548.70 00:15:28.511 ======================================================== 00:15:28.511 Total : 2250.70 562.68 117976.84 48324.81 343548.70 00:15:28.511 00:15:28.511 08:28:20 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:28.769 Initializing NVMe Controllers 00:15:28.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.769 Controller IO queue size 128, less than required. 00:15:28.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:28.769 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:28.769 Controller IO queue size 128, less than required. 00:15:28.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:28.769 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:28.769 WARNING: Some requested NVMe devices were skipped 00:15:28.769 No valid NVMe controllers or AIO or URING devices found 00:15:28.769 08:28:20 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:31.298 Initializing NVMe Controllers 00:15:31.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.298 Controller IO queue size 128, less than required. 00:15:31.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.298 Controller IO queue size 128, less than required. 00:15:31.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:31.298 Initialization complete. Launching workers. 
00:15:31.298 00:15:31.298 ==================== 00:15:31.298 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:31.298 TCP transport: 00:15:31.298 polls: 9440 00:15:31.298 idle_polls: 6106 00:15:31.298 sock_completions: 3334 00:15:31.298 nvme_completions: 6113 00:15:31.298 submitted_requests: 9116 00:15:31.298 queued_requests: 1 00:15:31.298 00:15:31.298 ==================== 00:15:31.298 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:31.298 TCP transport: 00:15:31.298 polls: 9767 00:15:31.298 idle_polls: 5708 00:15:31.298 sock_completions: 4059 00:15:31.298 nvme_completions: 6731 00:15:31.298 submitted_requests: 10134 00:15:31.298 queued_requests: 1 00:15:31.298 ======================================================== 00:15:31.298 Latency(us) 00:15:31.298 Device Information : IOPS MiB/s Average min max 00:15:31.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1524.56 381.14 85364.37 43516.36 131142.40 00:15:31.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1678.71 419.68 77170.14 40170.22 125988.37 00:15:31.298 ======================================================== 00:15:31.298 Total : 3203.28 800.82 81070.09 40170.22 131142.40 00:15:31.298 00:15:31.298 08:28:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:31.298 08:28:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.556 rmmod nvme_tcp 00:15:31.556 rmmod nvme_fabrics 00:15:31.556 rmmod nvme_keyring 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:31.556 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75131 ']' 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75131 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75131 ']' 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75131 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75131 00:15:31.557 killing process with pid 75131 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.557 08:28:23 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75131' 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75131 00:15:31.557 08:28:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75131 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:32.558 00:15:32.558 real 0m15.287s 00:15:32.558 user 0m56.959s 00:15:32.558 sys 0m4.300s 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.558 08:28:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:32.558 ************************************ 00:15:32.558 END TEST nvmf_perf 00:15:32.558 ************************************ 00:15:32.558 08:28:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:32.558 08:28:24 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:32.558 08:28:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:32.558 08:28:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.558 08:28:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.558 ************************************ 00:15:32.558 START TEST nvmf_fio_host 00:15:32.558 ************************************ 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:32.558 * Looking for test storage... 
00:15:32.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.558 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
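The fio host test that starts here rebuilds the same virtual network and target, then drives fio against the exported namespaces. Purely as an illustration of the workload shape (not the path the script itself takes), a comparable hand-run job against a kernel-attached namespace might look like the line below, assuming the device shows up as /dev/nvme0n1 after an nvme connect to the target:

  fio --name=nvmf-randrw --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=randrw --rwmixread=50 --bs=4k --iodepth=32 --numjobs=1 \
      --time_based --runtime=10

The actual job parameters the script uses come from the SPDK repo, so treat this only as a rough equivalent.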
00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:32.559 Cannot find device "nvmf_tgt_br" 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.559 Cannot find device "nvmf_tgt_br2" 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:32.559 Cannot find device "nvmf_tgt_br" 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:32.559 Cannot find device "nvmf_tgt_br2" 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:32.559 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:32.818 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:32.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:15:32.819 00:15:32.819 --- 10.0.0.2 ping statistics --- 00:15:32.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.819 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:32.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:32.819 00:15:32.819 --- 10.0.0.3 ping statistics --- 00:15:32.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.819 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:32.819 00:15:32.819 --- 10.0.0.1 ping statistics --- 00:15:32.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.819 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.819 08:28:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.077 08:28:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:33.077 08:28:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75545 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75545 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75545 ']' 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.078 08:28:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.078 [2024-07-15 08:28:25.072496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:33.078 [2024-07-15 08:28:25.072606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.078 [2024-07-15 08:28:25.217395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.336 [2024-07-15 08:28:25.340604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
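The nvmf_veth_init sequence traced above is what gives the rest of this run its addressing: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3, while the initiator stays in the root namespace at 10.0.0.1, with a veth-plus-bridge fabric in between. A condensed sketch of that same setup (the cleanup of leftover devices and two of the three ping checks are omitted):

```bash
# Condensed from the nvmf_veth_init trace above: the target's interfaces live in
# a network namespace, the initiator stays in the root namespace, and a bridge
# joins the host-side ends of the three veth pairs.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged forwarding
ping -c 1 10.0.0.2   # the trace finishes the setup with reachability checks of all three addresses
```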
00:15:33.336 [2024-07-15 08:28:25.340669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.336 [2024-07-15 08:28:25.340682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.336 [2024-07-15 08:28:25.340690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.336 [2024-07-15 08:28:25.340698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.336 [2024-07-15 08:28:25.340844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.336 [2024-07-15 08:28:25.341018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.336 [2024-07-15 08:28:25.341557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.336 [2024-07-15 08:28:25.341592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.336 [2024-07-15 08:28:25.394102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.272 [2024-07-15 08:28:26.343233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.272 08:28:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:34.530 Malloc1 00:15:34.530 08:28:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:34.788 08:28:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.047 08:28:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.305 [2024-07-15 08:28:27.358256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.305 08:28:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:35.563 08:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:35.821 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:35.821 fio-3.35 00:15:35.821 Starting 1 thread 00:15:38.364 00:15:38.364 test: (groupid=0, jobs=1): err= 0: pid=75628: Mon Jul 15 08:28:30 2024 00:15:38.364 read: IOPS=8212, BW=32.1MiB/s (33.6MB/s)(64.4MiB/2007msec) 00:15:38.364 slat (usec): min=2, max=328, avg= 2.74, stdev= 3.75 00:15:38.364 clat (usec): min=2629, max=15283, avg=8108.92, stdev=621.98 00:15:38.364 lat (usec): min=2678, max=15286, avg=8111.66, stdev=621.68 00:15:38.364 clat percentiles (usec): 00:15:38.364 | 1.00th=[ 6849], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7701], 00:15:38.364 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:15:38.364 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8979], 00:15:38.364 | 99.00th=[ 9503], 99.50th=[10552], 99.90th=[13042], 99.95th=[13829], 00:15:38.364 | 99.99th=[15139] 00:15:38.364 bw ( KiB/s): min=32240, max=33152, per=99.90%, avg=32819.50, stdev=398.89, samples=4 00:15:38.364 iops : min= 8060, max= 8288, avg=8204.75, stdev=99.66, samples=4 00:15:38.364 write: IOPS=8218, BW=32.1MiB/s (33.7MB/s)(64.4MiB/2007msec); 0 zone resets 00:15:38.364 slat (usec): 
min=2, max=281, avg= 2.80, stdev= 2.72 00:15:38.364 clat (usec): min=2467, max=15025, avg=7412.35, stdev=581.60 00:15:38.364 lat (usec): min=2481, max=15028, avg=7415.15, stdev=581.44 00:15:38.364 clat percentiles (usec): 00:15:38.364 | 1.00th=[ 6128], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:15:38.364 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:15:38.364 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:15:38.364 | 99.00th=[ 8717], 99.50th=[ 9896], 99.90th=[12649], 99.95th=[13173], 00:15:38.364 | 99.99th=[15008] 00:15:38.364 bw ( KiB/s): min=32192, max=33341, per=99.92%, avg=32849.25, stdev=481.92, samples=4 00:15:38.364 iops : min= 8048, max= 8335, avg=8212.25, stdev=120.39, samples=4 00:15:38.364 lat (msec) : 4=0.08%, 10=99.38%, 20=0.54% 00:15:38.364 cpu : usr=68.05%, sys=23.33%, ctx=17, majf=0, minf=7 00:15:38.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:38.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.364 issued rwts: total=16483,16495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.364 00:15:38.364 Run status group 0 (all jobs): 00:15:38.364 READ: bw=32.1MiB/s (33.6MB/s), 32.1MiB/s-32.1MiB/s (33.6MB/s-33.6MB/s), io=64.4MiB (67.5MB), run=2007-2007msec 00:15:38.364 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.4MiB (67.6MB), run=2007-2007msec 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:38.364 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.364 08:28:30 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.365 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:38.365 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:38.365 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:38.365 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:38.365 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:38.365 08:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:38.365 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:38.365 fio-3.35 00:15:38.365 Starting 1 thread 00:15:40.898 00:15:40.898 test: (groupid=0, jobs=1): err= 0: pid=75671: Mon Jul 15 08:28:32 2024 00:15:40.898 read: IOPS=7350, BW=115MiB/s (120MB/s)(231MiB/2008msec) 00:15:40.898 slat (usec): min=3, max=121, avg= 4.13, stdev= 2.12 00:15:40.898 clat (usec): min=2138, max=20019, avg=9643.36, stdev=2913.86 00:15:40.898 lat (usec): min=2142, max=20022, avg=9647.48, stdev=2913.92 00:15:40.898 clat percentiles (usec): 00:15:40.898 | 1.00th=[ 4490], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 7111], 00:15:40.899 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:15:40.899 | 70.00th=[10945], 80.00th=[11863], 90.00th=[13435], 95.00th=[15401], 00:15:40.899 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:15:40.899 | 99.99th=[19530] 00:15:40.899 bw ( KiB/s): min=51168, max=67936, per=50.34%, avg=59200.00, stdev=7071.06, samples=4 00:15:40.899 iops : min= 3198, max= 4246, avg=3700.00, stdev=441.94, samples=4 00:15:40.899 write: IOPS=4169, BW=65.2MiB/s (68.3MB/s)(121MiB/1855msec); 0 zone resets 00:15:40.899 slat (usec): min=35, max=377, avg=39.50, stdev= 7.95 00:15:40.899 clat (usec): min=7066, max=27068, avg=13840.24, stdev=2492.55 00:15:40.899 lat (usec): min=7103, max=27113, avg=13879.75, stdev=2492.46 00:15:40.899 clat percentiles (usec): 00:15:40.899 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:15:40.899 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13566], 60.00th=[14353], 00:15:40.899 | 70.00th=[15139], 80.00th=[16057], 90.00th=[17171], 95.00th=[17957], 00:15:40.899 | 99.00th=[20055], 99.50th=[20317], 99.90th=[25035], 99.95th=[26870], 00:15:40.899 | 99.99th=[27132] 00:15:40.899 bw ( KiB/s): min=52288, max=72480, per=92.01%, avg=61384.00, stdev=8701.11, samples=4 00:15:40.899 iops : min= 3268, max= 4530, avg=3836.50, stdev=543.82, samples=4 00:15:40.899 lat (msec) : 4=0.17%, 10=39.68%, 20=59.77%, 50=0.38% 00:15:40.899 cpu : usr=78.53%, sys=16.04%, ctx=6, majf=0, minf=4 00:15:40.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:40.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.899 issued rwts: total=14759,7735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.899 00:15:40.899 Run status group 0 (all jobs): 00:15:40.899 READ: bw=115MiB/s (120MB/s), 
115MiB/s-115MiB/s (120MB/s-120MB/s), io=231MiB (242MB), run=2008-2008msec 00:15:40.899 WRITE: bw=65.2MiB/s (68.3MB/s), 65.2MiB/s-65.2MiB/s (68.3MB/s-68.3MB/s), io=121MiB (127MB), run=1855-1855msec 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.899 rmmod nvme_tcp 00:15:40.899 rmmod nvme_fabrics 00:15:40.899 rmmod nvme_keyring 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75545 ']' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75545 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75545 ']' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75545 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75545 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:40.899 killing process with pid 75545 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75545' 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75545 00:15:40.899 08:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75545 00:15:41.162 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.162 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.162 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
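The two fio runs above exercise the target through the SPDK NVMe fio plugin rather than a kernel block device: fio_plugin first checks via ldd whether the plugin links an ASAN runtime so that library can be preloaded ahead of it, then LD_PRELOADs build/fio/spdk_nvme and hands fio a --filename made of key=value pairs that name the NVMe-oF controller. An equivalent standalone invocation of the first run, as a sketch (the harness reaches the same command through its fio_nvme/fio_plugin helpers):

```bash
# Sketch of the first fio run above, issued directly instead of via fio_nvme.
SPDK=/home/vagrant/spdk_repo/spdk

LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096
# The job file supplies ioengine=spdk and iodepth=128 (visible in the fio banner
# above); everything in --filename tells the plugin which NVMe-oF controller and
# namespace to open, so no /dev node is involved.
```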
00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:41.163 00:15:41.163 real 0m8.789s 00:15:41.163 user 0m35.753s 00:15:41.163 sys 0m2.348s 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.163 08:28:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.163 ************************************ 00:15:41.163 END TEST nvmf_fio_host 00:15:41.163 ************************************ 00:15:41.421 08:28:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:41.421 08:28:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:41.421 08:28:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.421 08:28:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.421 08:28:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.421 ************************************ 00:15:41.421 START TEST nvmf_failover 00:15:41.421 ************************************ 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:41.421 * Looking for test storage... 00:15:41.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.421 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:41.422 Cannot find device "nvmf_tgt_br" 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:41.422 Cannot find device "nvmf_tgt_br2" 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:41.422 Cannot find device "nvmf_tgt_br" 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:41.422 Cannot find device "nvmf_tgt_br2" 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:41.422 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.679 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:41.680 08:28:33 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:41.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:15:41.680 00:15:41.680 --- 10.0.0.2 ping statistics --- 00:15:41.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.680 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:41.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:41.680 00:15:41.680 --- 10.0.0.3 ping statistics --- 00:15:41.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.680 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:41.680 00:15:41.680 --- 10.0.0.1 ping statistics --- 00:15:41.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.680 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75889 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75889 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75889 ']' 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.680 08:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:41.937 [2024-07-15 08:28:33.871819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:41.937 [2024-07-15 08:28:33.871910] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.937 [2024-07-15 08:28:34.013819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:42.194 [2024-07-15 08:28:34.170661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.194 [2024-07-15 08:28:34.170763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.194 [2024-07-15 08:28:34.170780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.194 [2024-07-15 08:28:34.170791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.194 [2024-07-15 08:28:34.170800] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
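The failover target is launched the same way as the fio-host target above, but with core mask 0xE (binary 1110, i.e. cores 1-3, which is why three reactors report in just below) and, as before, inside the target namespace so it owns the 10.0.0.x addresses; the harness then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A sketch of that launch-and-wait pattern follows; the poll loop is a simplified stand-in for waitforlisten, not its actual implementation.

```bash
# Launch/wait pattern from the trace above (simplified).
SPDK=/home/vagrant/spdk_repo/spdk
rpc_py=$SPDK/scripts/rpc.py

# -m 0xE is binary 1110, so reactors run on cores 1-3; -e 0xFFFF enables every
# tracepoint group; running inside the namespace gives the target 10.0.0.2/.3.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Block until the app answers on its UNIX-domain RPC socket.
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
```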
00:15:42.194 [2024-07-15 08:28:34.171520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.194 [2024-07-15 08:28:34.171639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.194 [2024-07-15 08:28:34.171646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.194 [2024-07-15 08:28:34.245756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.758 08:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:43.324 [2024-07-15 08:28:35.203036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.324 08:28:35 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:43.581 Malloc0 00:15:43.581 08:28:35 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:43.838 08:28:35 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.095 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.362 [2024-07-15 08:28:36.317860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.363 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:44.634 [2024-07-15 08:28:36.554165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:44.634 [2024-07-15 08:28:36.786541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:44.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
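Before bdevperf is launched, the failover target is assembled over RPC much like the fio-host target, except that the single Malloc0 namespace is published through three TCP listeners (4420, 4421 and 4422) on the same 10.0.0.2 address, giving the host multiple paths to fail over between. The RPC sequence traced above, gathered in one place:

```bash
# The RPC calls traced above, collected for readability. All of them go to the
# target's default RPC socket (/var/tmp/spdk.sock).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as traced
$rpc_py bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                       # three listeners on the same address
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
```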
00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75947 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75947 /var/tmp/bdevperf.sock 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75947 ']' 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.634 08:28:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:46.009 08:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.009 08:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:46.009 08:28:37 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.009 NVMe0n1 00:15:46.267 08:28:38 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.525 00:15:46.525 08:28:38 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75976 00:15:46.525 08:28:38 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:46.525 08:28:38 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:47.900 08:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.900 [2024-07-15 08:28:39.972186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.900 [2024-07-15 08:28:39.972275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.900 [2024-07-15 08:28:39.972287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.900 [2024-07-15 08:28:39.972297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.900 [2024-07-15 08:28:39.972306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.900 [2024-07-15 08:28:39.972316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the 
state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973080] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 [2024-07-15 08:28:39.973155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431950 is same with the state(5) to be set 00:15:47.901 08:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:51.185 08:28:43 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.185 00:15:51.444 08:28:43 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:51.702 08:28:43 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:55.003 08:28:46 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.003 [2024-07-15 08:28:46.896563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.003 08:28:46 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:55.937 08:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:56.196 [2024-07-15 08:28:48.188202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14309b0 is same with the state(5) to be set 00:15:56.196 [2024-07-15 08:28:48.188276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14309b0 is same with the state(5) to be set 00:15:56.196 [2024-07-15 08:28:48.188290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14309b0 is same with the state(5) to be set 00:15:56.196 08:28:48 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75976 00:16:02.774 0 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75947 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75947 ']' 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75947 
00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75947 00:16:02.774 killing process with pid 75947 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75947' 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75947 00:16:02.774 08:28:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75947 00:16:02.774 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:02.774 [2024-07-15 08:28:36.847826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:02.774 [2024-07-15 08:28:36.847943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75947 ] 00:16:02.774 [2024-07-15 08:28:36.985706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.774 [2024-07-15 08:28:37.103914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.774 [2024-07-15 08:28:37.157195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:02.774 Running I/O for 15 seconds... 
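The xtrace above is the whole failover exercise driven over bdevperf's JSON-RPC socket; collected into one place it amounts to the minimal bash sketch below. Every path, option, port and NQN is taken from the trace itself; the sketch assumes the NVMe-oF target configured earlier in the test is already listening on 10.0.0.2, and it substitutes plain sleep/kill for the autotest_common.sh helpers (waitforlisten, killprocess), so it is an illustration of the flow rather than the test script itself.

    rootdir=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Start bdevperf against its own RPC socket (options copied from the trace)
    # and give it two paths to the same subsystem: ports 4420 and 4421.
    "$rootdir/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    sleep 1   # stand-in for waitforlisten on $sock
    "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
    "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"

    # Kick off the 15 s verify workload, then remove/add listeners on the target
    # side (default target RPC socket, no -s) to force path failovers under I/O.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
    run_test_pid=$!
    sleep 1
    "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422
    wait "$run_test_pid"    # returns once the timed verify run completes
    kill "$bdevperf_pid"    # the real script uses killprocess from autotest_common.sh

In this run that sequence is what produced the aborted-I/O dump and the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" / "Resetting controller successful." messages recorded in try.txt below.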
00:16:02.774 [2024-07-15 08:28:39.972574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.774 [2024-07-15 08:28:39.972633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.972651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.774 [2024-07-15 08:28:39.972666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.972687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.774 [2024-07-15 08:28:39.972702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.972730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.774 [2024-07-15 08:28:39.972747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.972762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68570 is same with the state(5) to be set 00:16:02.774 [2024-07-15 08:28:39.973215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.973984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.973997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.774 [2024-07-15 08:28:39.974222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.774 [2024-07-15 08:28:39.974242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 
[2024-07-15 08:28:39.974382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.974970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.974985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59376 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.775 [2024-07-15 08:28:39.975882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.775 [2024-07-15 08:28:39.975896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.975911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.975925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.975940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:02.776 [2024-07-15 08:28:39.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.975983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.975998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976555] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:39.976868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.976977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.976996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.977026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.977054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.977083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.977134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.977163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.776 [2024-07-15 08:28:39.977202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.776 [2024-07-15 08:28:39.977217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb97c0 is same with the state(5) to be set 00:16:02.776 [2024-07-15 08:28:39.977233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.776 [2024-07-15 08:28:39.977244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.776 [2024-07-15 08:28:39.977255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59672 len:8 PRP1 0x0 PRP2 0x0 00:16:02.776 [2024-07-15 08:28:39.977269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:39.977329] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb97c0 was disconnected and freed. reset controller. 00:16:02.776 [2024-07-15 08:28:39.977347] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:02.776 [2024-07-15 08:28:39.977362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:02.776 [2024-07-15 08:28:39.981234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:02.776 [2024-07-15 08:28:39.981272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68570 (9): Bad file descriptor 00:16:02.776 [2024-07-15 08:28:40.018480] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:02.776 [2024-07-15 08:28:43.656593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:43.656751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:43.656785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:43.656815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:43.656844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:43.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.776 [2024-07-15 08:28:43.656902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.776 [2024-07-15 08:28:43.656916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.656931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.656945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.656960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.656974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.656989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 
08:28:43.657192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.657891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.657988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.777 [2024-07-15 08:28:43.658023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.777 [2024-07-15 08:28:43.658291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.777 [2024-07-15 08:28:43.658307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.778 [2024-07-15 08:28:43.658532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.658777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.658974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.658987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65192 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.778 [2024-07-15 08:28:43.659514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:02.778 [2024-07-15 08:28:43.659806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.659971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.778 [2024-07-15 08:28:43.659989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.778 [2024-07-15 08:28:43.660003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cead30 is same with the state(5) to be set 00:16:02.779 [2024-07-15 08:28:43.660039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64880 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65208 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65216 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65224 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65232 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65248 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65256 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:02.779 [2024-07-15 08:28:43.660437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65264 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65272 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65280 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65288 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65296 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65304 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660753] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65312 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65320 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65328 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65336 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.660954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.660964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.660974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65344 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.660987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.661011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.661021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65352 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.661034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.661058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.661068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65360 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.661081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.661110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.661121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65368 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.661135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.661166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.661176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.661190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.661213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.661224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65384 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.661237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.779 [2024-07-15 08:28:43.661261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.779 [2024-07-15 08:28:43.661271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65392 len:8 PRP1 0x0 PRP2 0x0 00:16:02.779 [2024-07-15 08:28:43.661285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661366] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cead30 was disconnected and freed. reset controller. 
00:16:02.779 [2024-07-15 08:28:43.661391] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:02.779 [2024-07-15 08:28:43.661481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.779 [2024-07-15 08:28:43.661503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.779 [2024-07-15 08:28:43.661532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.779 [2024-07-15 08:28:43.661559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.779 [2024-07-15 08:28:43.661587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:43.661600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:02.779 [2024-07-15 08:28:43.661649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68570 (9): Bad file descriptor 00:16:02.779 [2024-07-15 08:28:43.665610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:02.779 [2024-07-15 08:28:43.700765] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
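The entries above complete one failover cycle: the I/O qpair 0x1cead30 is disconnected and freed, bdev_nvme_failover_trid moves the target id from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset is reported successful. For reference, a minimal sketch of how a multi-listener NVMe-oF TCP subsystem of this shape can be brought up with SPDK's rpc.py is given below. It is an illustration only: the Malloc0 namespace, the bdev name Nvme0, the serial number, and the use of the three listener ports seen in this log are assumptions, and this is not the actual script driving this test run.

    # target side: TCP transport, one subsystem, namespace, and three listeners (ports assumed from the log)
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # host side: attach the controller over the first path; how the alternate paths are
    # registered for failover is part of the test harness and not shown here
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With a setup along these lines, a failover like the one logged here can be provoked by removing the active listener (nvmf_subsystem_remove_listener) while I/O is in flight, which surfaces as the ABORTED - SQ DELETION completions and the reset/failover notices recorded in this console output.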
00:16:02.779 [2024-07-15 08:28:48.188580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.779 [2024-07-15 08:28:48.188633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:48.188688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.779 [2024-07-15 08:28:48.188706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:48.188722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.779 [2024-07-15 08:28:48.188751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.779 [2024-07-15 08:28:48.188770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.188783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.188981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.188996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.189462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:02.780 [2024-07-15 08:28:48.189694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.189974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.189989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.190003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.190031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.190076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.780 [2024-07-15 08:28:48.190106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.780 [2024-07-15 08:28:48.190319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.780 [2024-07-15 08:28:48.190334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.190348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 
[2024-07-15 08:28:48.190667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.190877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.190906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.190936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.190980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.190995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.781 [2024-07-15 08:28:48.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:02.781 [2024-07-15 08:28:48.191671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.191977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.191991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.781 [2024-07-15 08:28:48.192007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.781 [2024-07-15 08:28:48.192028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.782 [2024-07-15 08:28:48.192384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9dd0 is same with the state(5) to be set 00:16:02.782 [2024-07-15 08:28:48.192440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1128 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1648 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1656 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1672 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1680 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1688 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1704 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.192939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.192959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1712 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.192972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.782 [2024-07-15 08:28:48.192985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.192995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.193005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1720 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.193018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.193041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.193081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.193095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.782 [2024-07-15 08:28:48.193119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.782 [2024-07-15 08:28:48.193129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1736 len:8 PRP1 0x0 PRP2 0x0 00:16:02.782 [2024-07-15 08:28:48.193142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193199] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce9dd0 was disconnected and freed. reset controller. 
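The long runs of "ABORTED - SQ DELETION" entries above are the expected draining of outstanding and queued commands when a qpair's submission queue is deleted during failover; each READ/WRITE line is one command completed manually with that status, not a device error. When triaging a log like this it is usually enough to summarize the storm rather than read it entry by entry. A rough sketch, assuming the output was saved to the same try.txt the test writes:

    # how many commands were drained with SQ-deletion status,
    # and how many queued requests were aborted on top of that
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$log"
    grep -c 'aborting queued i/o'   "$log"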
00:16:02.782 [2024-07-15 08:28:48.193217] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:02.782 [2024-07-15 08:28:48.193272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.782 [2024-07-15 08:28:48.193293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.782 [2024-07-15 08:28:48.193322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.782 [2024-07-15 08:28:48.193350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.782 [2024-07-15 08:28:48.193393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.782 [2024-07-15 08:28:48.193407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:02.782 [2024-07-15 08:28:48.197273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:02.782 [2024-07-15 08:28:48.197312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68570 (9): Bad file descriptor 00:16:02.782 [2024-07-15 08:28:48.230162] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:02.782 00:16:02.782 Latency(us) 00:16:02.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.782 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:02.782 Verification LBA range: start 0x0 length 0x4000 00:16:02.782 NVMe0n1 : 15.01 8700.01 33.98 212.48 0.00 14329.10 647.91 18707.55 00:16:02.782 =================================================================================================================== 00:16:02.782 Total : 8700.01 33.98 212.48 0.00 14329.10 647.91 18707.55 00:16:02.782 Received shutdown signal, test time was about 15.000000 seconds 00:16:02.782 00:16:02.782 Latency(us) 00:16:02.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.782 =================================================================================================================== 00:16:02.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:02.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
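The table above closes the 15-second run: roughly 8700 IOPS on the verify workload, with the non-zero Fail/s column coming from the I/O that errored out around the three resets. As a sanity check, queue depth divided by IOPS (128 / 8700.01 ≈ 14.7 ms) lands close to the reported 14329.10 us average latency. The trace that follows starts a second bdevperf with -z on its own RPC socket, so the I/O pass is triggered later via bdevperf.py perform_tests, and drives the same failover by hand: add listeners on 4421/4422, attach one controller with three TCP paths, confirm it exists, detach the active path, and run the test. A condensed sketch of that cycle using the paths from this workspace (the script issues the calls one at a time with checks in between; the loop here is a simplification):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # one controller, three TCP paths to the same subsystem
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0   # controller is up
    # drop the active path so bdev_nvme fails over to the next one
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests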
00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76149 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76149 /var/tmp/bdevperf.sock 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76149 ']' 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.782 08:28:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:03.041 08:28:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.041 08:28:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:03.041 08:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:03.300 [2024-07-15 08:28:55.421460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:03.300 08:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:03.559 [2024-07-15 08:28:55.729769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:03.818 08:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:04.076 NVMe0n1 00:16:04.076 08:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:04.343 00:16:04.343 08:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:04.617 00:16:04.617 08:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:04.617 08:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:04.875 08:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:05.133 08:28:57 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:08.449 08:29:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:08.449 08:29:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:08.449 08:29:00 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76226 00:16:08.449 08:29:00 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76226 00:16:08.449 08:29:00 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.819 0 00:16:09.819 08:29:01 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:09.819 [2024-07-15 08:28:54.199177] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:09.819 [2024-07-15 08:28:54.199369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ] 00:16:09.819 [2024-07-15 08:28:54.342209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.819 [2024-07-15 08:28:54.470927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.819 [2024-07-15 08:28:54.531137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:09.819 [2024-07-15 08:28:57.253427] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:09.819 [2024-07-15 08:28:57.253594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.819 [2024-07-15 08:28:57.253625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.819 [2024-07-15 08:28:57.253648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.819 [2024-07-15 08:28:57.253664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.819 [2024-07-15 08:28:57.253681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.819 [2024-07-15 08:28:57.253697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.819 [2024-07-15 08:28:57.253715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.819 [2024-07-15 08:28:57.253769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.819 [2024-07-15 08:28:57.253789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:09.819 [2024-07-15 08:28:57.253851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:09.819 [2024-07-15 08:28:57.253893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fc570 (9): Bad file descriptor 00:16:09.819 [2024-07-15 08:28:57.265420] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:09.819 Running I/O for 1 seconds... 00:16:09.819 00:16:09.819 Latency(us) 00:16:09.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.819 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:09.819 Verification LBA range: start 0x0 length 0x4000 00:16:09.819 NVMe0n1 : 1.01 8408.43 32.85 0.00 0.00 15129.69 3217.22 15252.01 00:16:09.819 =================================================================================================================== 00:16:09.819 Total : 8408.43 32.85 0.00 0.00 15129.69 3217.22 15252.01 00:16:09.819 08:29:01 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:09.819 08:29:01 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:10.077 08:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.335 08:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:10.335 08:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:10.335 08:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.901 08:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:14.184 08:29:05 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:14.184 08:29:05 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76149 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76149 ']' 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76149 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76149 00:16:14.184 killing process with pid 76149 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76149' 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76149 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76149 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:14.184 08:29:06 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.442 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.442 rmmod nvme_tcp 00:16:14.442 rmmod nvme_fabrics 00:16:14.442 rmmod nvme_keyring 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75889 ']' 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75889 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75889 ']' 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75889 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75889 00:16:14.700 killing process with pid 75889 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75889' 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75889 00:16:14.700 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75889 00:16:14.959 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.959 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.959 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.959 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.959 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.960 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.960 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.960 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.960 08:29:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:14.960 00:16:14.960 real 0m33.593s 00:16:14.960 user 2m10.618s 00:16:14.960 sys 0m5.686s 00:16:14.960 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.960 08:29:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:14.960 ************************************ 00:16:14.960 END TEST nvmf_failover 00:16:14.960 
************************************ 00:16:14.960 08:29:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:14.960 08:29:06 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:14.960 08:29:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.960 08:29:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.960 08:29:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.960 ************************************ 00:16:14.960 START TEST nvmf_host_discovery 00:16:14.960 ************************************ 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:14.960 * Looking for test storage... 00:16:14.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.960 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:15.217 Cannot find device "nvmf_tgt_br" 00:16:15.217 
08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.217 Cannot find device "nvmf_tgt_br2" 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:15.217 Cannot find device "nvmf_tgt_br" 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:15.217 Cannot find device "nvmf_tgt_br2" 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.217 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:15.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:16:15.475 00:16:15.475 --- 10.0.0.2 ping statistics --- 00:16:15.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.475 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:15.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:16:15.475 00:16:15.475 --- 10.0.0.3 ping statistics --- 00:16:15.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.475 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:15.475 00:16:15.475 --- 10.0.0.1 ping statistics --- 00:16:15.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.475 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76496 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76496 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76496 ']' 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.475 08:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.475 [2024-07-15 08:29:07.528859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:15.475 [2024-07-15 08:29:07.528959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.733 [2024-07-15 08:29:07.669738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.733 [2024-07-15 08:29:07.793890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
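The block above is the nvmf_veth_init helper building the test network: a network namespace for the target, veth pairs whose host-side ends join a bridge, static 10.0.0.x addresses, an iptables rule for the NVMe/TCP port, and ping checks in both directions. Below is a condensed sketch of that topology, reconstructed from the commands in the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity.

#!/usr/bin/env bash
# Condensed sketch of the veth/bridge topology set up by nvmf_veth_init.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_br ends stay in the root namespace and join the bridge,
# the target-facing end moves into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: initiator 10.0.0.1 in the root namespace, target 10.0.0.2 inside it.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity sanity check, as in the log.
ping -c 1 10.0.0.2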
00:16:15.733 [2024-07-15 08:29:07.793957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.733 [2024-07-15 08:29:07.793972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.733 [2024-07-15 08:29:07.793982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.733 [2024-07-15 08:29:07.793992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.733 [2024-07-15 08:29:07.794023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.733 [2024-07-15 08:29:07.849910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:16.298 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.298 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:16.298 08:29:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.298 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.298 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.556 [2024-07-15 08:29:08.489751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.556 [2024-07-15 08:29:08.497860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.556 null0 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.556 null1 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.556 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76528 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76528 /tmp/host.sock 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76528 ']' 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:16.557 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.557 08:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.557 [2024-07-15 08:29:08.576249] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:16.557 [2024-07-15 08:29:08.576334] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76528 ] 00:16:16.557 [2024-07-15 08:29:08.708286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.815 [2024-07-15 08:29:08.817212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.815 [2024-07-15 08:29:08.871155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.750 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.009 08:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 [2024-07-15 08:29:10.006402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:18.009 
08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:18.009 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:18.010 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:18.010 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.010 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:18.010 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.010 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:18.268 08:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:18.526 [2024-07-15 08:29:10.622829] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:18.526 [2024-07-15 08:29:10.622876] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:18.527 [2024-07-15 08:29:10.622897] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:18.527 [2024-07-15 08:29:10.628882] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:18.527 [2024-07-15 08:29:10.686384] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:18.527 [2024-07-15 08:29:10.686449] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.093 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.418 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.418 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:19.419 
08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.419 [2024-07-15 08:29:11.587948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:19.419 [2024-07-15 08:29:11.588600] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:19.419 [2024-07-15 08:29:11.588633] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.419 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:19.680 [2024-07-15 08:29:11.594585] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
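Throughout this discovery test the assertions are built from two small query helpers: get_subsystem_names (controller names from bdev_nvme_get_controllers) and get_bdev_list (bdev names from bdev_get_bdevs), each piped through jq, sort and xargs exactly as the trace shows. A hedged sketch of those helpers follows, with the rpc.py invocation spelled out in place of the suite's rpc_cmd wrapper; the repo path and host socket are assumed from this run.

# Sketch of the query helpers polled by waitforcondition in the trace above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {
    # Controller names seen by the host-side app (e.g. "nvme0").
    "$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Bdev names created from discovered namespaces (e.g. "nvme0n1 nvme0n2").
    "$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

# Example expectation after the second namespace is added, as in the log:
[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]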
00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.680 [2024-07-15 08:29:11.656886] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:19.680 [2024-07-15 08:29:11.656922] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:19.680 [2024-07-15 08:29:11.656930] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.680 [2024-07-15 08:29:11.804798] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:19.680 [2024-07-15 08:29:11.804842] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:19.680 [2024-07-15 08:29:11.810793] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:19.680 [2024-07-15 08:29:11.810833] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:19.680 [2024-07-15 08:29:11.810949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.680 [2024-07-15 08:29:11.810991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.680 [2024-07-15 08:29:11.811005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.680 [2024-07-15 08:29:11.811014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.680 [2024-07-15 08:29:11.811024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.680 [2024-07-15 08:29:11.811034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.680 [2024-07-15 08:29:11.811043] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.680 [2024-07-15 08:29:11.811053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.680 [2024-07-15 08:29:11.811062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227c600 is same with the state(5) to be set 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.680 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.938 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.938 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.938 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:19.938 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@913 -- # local max=10 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.939 08:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:19.939 08:29:12 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.939 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.198 08:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.134 [2024-07-15 08:29:13.213545] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:21.134 [2024-07-15 08:29:13.213597] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:21.134 [2024-07-15 08:29:13.213618] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:21.134 [2024-07-15 08:29:13.219584] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:21.134 [2024-07-15 08:29:13.280302] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:21.134 [2024-07-15 08:29:13.280372] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.134 request: 00:16:21.134 { 00:16:21.134 "name": "nvme", 00:16:21.134 "trtype": "tcp", 00:16:21.134 "traddr": "10.0.0.2", 00:16:21.134 "adrfam": "ipv4", 00:16:21.134 "trsvcid": "8009", 00:16:21.134 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:21.134 "wait_for_attach": true, 00:16:21.134 "method": "bdev_nvme_start_discovery", 00:16:21.134 "req_id": 1 00:16:21.134 } 00:16:21.134 Got JSON-RPC error response 00:16:21.134 response: 00:16:21.134 { 00:16:21.134 "code": -17, 00:16:21.134 "message": "File exists" 00:16:21.134 } 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:21.134 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:21.135 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 request: 00:16:21.392 { 00:16:21.392 "name": "nvme_second", 00:16:21.392 "trtype": "tcp", 00:16:21.392 "traddr": "10.0.0.2", 00:16:21.392 "adrfam": "ipv4", 00:16:21.392 "trsvcid": "8009", 00:16:21.392 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:21.392 "wait_for_attach": true, 00:16:21.392 "method": "bdev_nvme_start_discovery", 00:16:21.392 "req_id": 1 00:16:21.392 } 00:16:21.392 Got JSON-RPC error response 00:16:21.392 response: 00:16:21.392 { 00:16:21.392 "code": -17, 00:16:21.392 "message": "File exists" 00:16:21.392 } 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.392 08:29:13 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.392 08:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.763 [2024-07-15 08:29:14.557069] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:22.763 [2024-07-15 08:29:14.557160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2296f70 with addr=10.0.0.2, port=8010 00:16:22.763 [2024-07-15 08:29:14.557186] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:22.763 [2024-07-15 08:29:14.557197] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:22.763 [2024-07-15 08:29:14.557206] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:23.697 [2024-07-15 08:29:15.557086] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.697 [2024-07-15 08:29:15.557161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2296f70 with addr=10.0.0.2, port=8010 00:16:23.697 [2024-07-15 08:29:15.557187] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:23.697 [2024-07-15 08:29:15.557199] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:23.697 [2024-07-15 08:29:15.557208] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:24.632 [2024-07-15 08:29:16.556912] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:24.632 request: 00:16:24.632 { 00:16:24.632 "name": "nvme_second", 00:16:24.632 "trtype": "tcp", 00:16:24.632 "traddr": "10.0.0.2", 00:16:24.632 "adrfam": "ipv4", 00:16:24.632 "trsvcid": "8010", 00:16:24.632 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:24.632 "wait_for_attach": false, 00:16:24.632 "attach_timeout_ms": 3000, 00:16:24.632 "method": "bdev_nvme_start_discovery", 00:16:24.632 "req_id": 1 00:16:24.632 } 00:16:24.632 Got JSON-RPC error response 00:16:24.632 response: 00:16:24.632 { 00:16:24.632 "code": -110, 
00:16:24.632 "message": "Connection timed out" 00:16:24.632 } 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76528 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.632 rmmod nvme_tcp 00:16:24.632 rmmod nvme_fabrics 00:16:24.632 rmmod nvme_keyring 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76496 ']' 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76496 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76496 ']' 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76496 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76496 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.632 08:29:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76496' 00:16:24.632 killing process with pid 76496 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76496 00:16:24.632 08:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76496 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:24.891 00:16:24.891 real 0m10.046s 00:16:24.891 user 0m19.389s 00:16:24.891 sys 0m1.931s 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.891 08:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.891 ************************************ 00:16:24.891 END TEST nvmf_host_discovery 00:16:24.891 ************************************ 00:16:25.150 08:29:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:25.150 08:29:17 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:25.150 08:29:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:25.150 08:29:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.150 08:29:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.150 ************************************ 00:16:25.150 START TEST nvmf_host_multipath_status 00:16:25.150 ************************************ 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:25.150 * Looking for test storage... 
00:16:25.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.150 Cannot find device "nvmf_tgt_br" 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:25.150 Cannot find device "nvmf_tgt_br2" 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.150 Cannot find device "nvmf_tgt_br" 00:16:25.150 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:25.151 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.151 Cannot find device "nvmf_tgt_br2" 00:16:25.151 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:25.151 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:25.151 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.409 08:29:17 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.409 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:25.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:25.410 00:16:25.410 --- 10.0.0.2 ping statistics --- 00:16:25.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.410 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:25.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:25.410 00:16:25.410 --- 10.0.0.3 ping statistics --- 00:16:25.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.410 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:25.410 00:16:25.410 --- 10.0.0.1 ping statistics --- 00:16:25.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.410 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76979 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76979 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76979 ']' 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.410 08:29:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:25.668 [2024-07-15 08:29:17.625277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
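For reference, the interface setup that the nvmf/common.sh trace above performs (roughly lines 156-209 of that script) reduces to the plain ip/iptables sequence below. This is a minimal sketch reassembled from the commands visible in the log, assuming root privileges and the same namespace, interface and address names; the real helper additionally tears down any leftover state first, which is where the "Cannot find device" / "Cannot open network namespace" messages above come from on a clean host.

  #!/usr/bin/env bash
  # One namespace for the target, veth pairs for the initiator (10.0.0.1) and the
  # two target addresses (10.0.0.2, 10.0.0.3), all bridged together on nvmf_br.
  set -e
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side ends so the initiator and both target addresses can talk,
  # then allow NVMe/TCP (port 4420) in and bridge-local forwarding.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Connectivity sanity checks, exactly as in the log.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  # The target is then launched inside the namespace on cores 0-1 (-m 0x3); the
  # test waits for /var/tmp/spdk.sock (waitforlisten) before issuing any RPC.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &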
00:16:25.668 [2024-07-15 08:29:17.625401] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.668 [2024-07-15 08:29:17.770860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:25.926 [2024-07-15 08:29:17.889955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.926 [2024-07-15 08:29:17.890024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.926 [2024-07-15 08:29:17.890046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.926 [2024-07-15 08:29:17.890055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.926 [2024-07-15 08:29:17.890062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.926 [2024-07-15 08:29:17.890144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.926 [2024-07-15 08:29:17.890155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.926 [2024-07-15 08:29:17.941678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76979 00:16:26.491 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:26.748 [2024-07-15 08:29:18.811207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.748 08:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:27.006 Malloc0 00:16:27.006 08:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:27.264 08:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.523 08:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.781 [2024-07-15 08:29:19.795310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.781 08:29:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:28.040 [2024-07-15 08:29:20.035425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77030 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77030 /var/tmp/bdevperf.sock 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77030 ']' 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.040 08:29:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:28.975 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.975 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:28.975 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:29.233 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:29.492 Nvme0n1 00:16:29.492 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:29.750 Nvme0n1 00:16:29.750 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:29.750 08:29:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:32.279 08:29:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:32.279 08:29:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:32.279 08:29:24 nvmf_tcp.nvmf_host_multipath_status -- 
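Spelled out, the RPC sequence traced in this stretch configures the target subsystem with two listeners and then brings up bdevperf as the multipath initiator. A condensed sketch using the same paths and flags as the trace; the test proper waits on each RPC socket with waitforlisten, for which a simple poll stands in here.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, and an
  # ANA-reporting subsystem (-r, -m 2) listening on both 4420 and 4421.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Initiator side: bdevperf idles (-z) behind its own RPC socket; both listener
  # ports are attached to the same controller name, the second with -x multipath.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  bdevperf_pid=$!
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # Start the 120-second verify workload that the ANA-state flips below run against.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 120 -s /var/tmp/bdevperf.sock perform_tests &

Both attach calls report the same bdev name (Nvme0n1), which is the device whose I/O paths the checks below query through bdev_nvme_get_io_paths.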
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:32.279 08:29:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.653 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:33.910 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.910 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:33.910 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.910 08:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:34.214 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.214 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:34.214 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.214 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.483 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:34.741 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.741 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:34.741 08:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:35.308 08:29:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:35.308 08:29:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.692 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.958 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.958 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.958 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.958 08:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:37.255 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.255 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:37.255 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.255 08:29:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:37.255 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.255 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:37.256 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.256 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:37.527 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.527 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:37.527 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.528 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.799 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.799 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:37.799 08:29:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:38.073 08:29:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:38.335 08:29:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:39.270 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:39.270 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:39.270 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.270 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:39.529 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.529 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:39.529 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.529 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.788 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:39.788 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.788 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.788 08:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:40.046 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.046 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:40.046 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.046 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:40.305 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.305 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:40.305 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.305 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.563 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.563 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:40.563 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.563 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.821 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.821 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:40.821 08:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:41.078 08:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:41.336 08:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:42.714 08:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.973 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.973 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:42.973 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.973 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:43.231 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.231 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:43.231 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.231 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:43.490 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.490 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:43.490 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.490 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:43.748 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.748 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:43.748 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.748 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.006 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:44.006 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:44.006 08:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:44.263 08:29:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:44.521 08:29:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:45.480 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:45.480 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:45.480 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:45.480 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.742 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.742 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:45.742 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.742 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.001 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.001 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:46.001 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.001 08:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:46.259 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.259 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:46.259 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:46.259 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.517 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.517 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:46.517 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.517 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:46.775 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.775 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:46.775 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.775 08:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:47.034 08:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.034 08:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:47.034 08:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:47.293 08:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:47.552 08:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:48.489 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:48.489 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:48.489 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.489 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:48.748 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.748 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:48.748 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.748 08:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:49.006 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.007 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:49.007 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.007 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:49.265 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.265 08:29:41 
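Every check cycle in this stretch is the same two helpers from host/multipath_status.sh: set_ANA_state flips the ANA state of the two target listeners, and port_status asks bdevperf for its view of each path and compares one field. The bodies below are a sketch reconstructed from the traced commands, not a copy of the script.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # set_ANA_state <state for 4420> <state for 4421>; the states used in this run
  # are optimized, non_optimized and inaccessible.
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # port_status <trsvcid> <field> <expected>: field is current, connected or
  # accessible; returns non-zero if bdevperf's view disagrees.
  port_status() {
      local value
      value=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ "$value" == "$3" ]]
  }

  # Example matching the @104/@106 cycle above: with 4420 non_optimized and 4421
  # inaccessible, I/O stays current on 4420 while 4421 remains connected but is
  # no longer accessible.
  set_ANA_state non_optimized inaccessible
  sleep 1
  port_status 4420 current true && port_status 4420 accessible true
  port_status 4421 connected true && port_status 4421 accessible false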
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:49.265 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:49.265 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.523 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.523 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:49.523 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.523 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:49.781 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:49.781 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:50.040 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:50.040 08:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.040 08:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.040 08:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:50.610 08:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:50.610 08:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:50.610 08:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:50.870 08:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:52.244 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.504 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.504 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:52.504 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.504 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:52.763 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.763 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:52.763 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.763 08:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:53.021 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.021 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:53.021 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.021 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:53.279 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.279 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:53.279 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.279 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:53.538 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.538 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:53.538 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:53.797 08:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:54.118 08:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:55.081 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:55.081 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:55.081 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.081 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:55.338 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:55.338 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:55.339 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.339 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:55.597 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.597 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:55.597 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.597 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:55.854 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.854 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:55.854 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.854 08:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:56.112 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.112 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:56.112 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.112 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:56.370 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.370 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:56.370 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.370 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:56.628 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.628 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:56.628 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:56.884 08:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:57.142 08:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:58.515 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.774 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.774 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:58.774 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:58.774 08:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.032 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.032 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:59.032 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.032 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:59.291 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.291 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:59.291 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.291 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:59.550 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.550 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:59.550 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.550 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:00.115 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.115 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:00.115 08:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:00.115 08:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:00.373 08:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.755 08:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:02.014 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:02.014 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:17:02.014 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.014 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:02.273 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.273 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:02.273 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.273 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:02.533 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.533 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:02.533 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.533 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:02.791 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.791 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:02.791 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:02.791 08:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77030 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77030 ']' 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77030 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77030 00:17:03.050 killing process with pid 77030 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77030' 00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77030 
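Teardown: once the ANA matrix has been walked, the bdevperf process (pid 77030, tracked in bdevperf_pid) is stopped through the usual killprocess helper from autotest_common.sh. A simplified sketch of what the traced checks amount to; the real helper also handles sudo-wrapped and non-Linux processes differently.

  # killprocess <pid>: confirm the pid is alive and still the process we started,
  # then SIGTERM it and reap it.
  killprocess() {
      local pid=$1 process_name
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" || return 1
      if [[ "$(uname)" == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_2" in this run
          [[ "$process_name" == sudo ]] && return 1        # simplified: never TERM a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

  killprocess "$bdevperf_pid"

Because the 120-second perform_tests run is still in flight when the process exits, the bdevperf.py client loses its RPC connection, which is what produces the "Connection closed with partial response" lines that follow; the test then dumps the collected bdevperf output from try.txt.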
00:17:03.050 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77030 00:17:03.312 Connection closed with partial response: 00:17:03.312 00:17:03.312 00:17:03.312 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77030 00:17:03.312 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:03.312 [2024-07-15 08:29:20.118068] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:03.312 [2024-07-15 08:29:20.118225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77030 ] 00:17:03.312 [2024-07-15 08:29:20.263921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.312 [2024-07-15 08:29:20.388633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.312 [2024-07-15 08:29:20.444980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:03.312 Running I/O for 90 seconds... 00:17:03.312 [2024-07-15 08:29:36.191098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1008 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.312 [2024-07-15 08:29:36.191558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:117 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.191970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.191992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.312 [2024-07-15 08:29:36.192445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:03.312 [2024-07-15 08:29:36.192465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.192479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.192513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:03.313 
[2024-07-15 08:29:36.192706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.192857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.192893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.192930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.192968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.192989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 
cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.313 [2024-07-15 08:29:36.193452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.193966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.193981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.194002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.194017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.194039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.194053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.194074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.313 [2024-07-15 08:29:36.194089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:03.313 [2024-07-15 08:29:36.194119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 
[2024-07-15 08:29:36.194244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.194395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1256 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.194971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.194992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:54 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.195006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.195050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.195088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.195124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.314 [2024-07-15 08:29:36.195166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.314 [2024-07-15 08:29:36.195744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:03.314 [2024-07-15 08:29:36.195803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:36.195819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:03.315 
[2024-07-15 08:29:36.196594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:36.196621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.196972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.196988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.197033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.197054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.197085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.197101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.197131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.197146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.197176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.197191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:36.197221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:36.197238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:52.513575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:52.513609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:52.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.513968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.513982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.514018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.514054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.315 [2024-07-15 08:29:52.514105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:52.514139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:52.514174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.315 [2024-07-15 08:29:52.514209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:03.315 [2024-07-15 08:29:52.514244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:03.315 [2024-07-15 08:29:52.514264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.514278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.514313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.514354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.514390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.514425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.514462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.514981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.514996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.515032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.515421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
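Note: the (03/02) pairs in the completions above are the NVMe status fields printed by spdk_nvme_print_completion, status code type 0x3 (path-related) and status code 0x02, i.e. "Asymmetric Access Inaccessible": every queued READ/WRITE on qid:1 is being completed with an ANA-inaccessible path error, which is the condition this multipath-status test exercises. A minimal bash sketch for tallying such completions from a saved slice of this log; the log path and the helper are purely illustrative and not part of the test suite:

    # Hypothetical post-processing helper, assuming the slice above was saved to $LOG.
    LOG=${1:-/tmp/completions.log}
    # -o so every completion counts, even when several share one physical line
    total=$(grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$LOG" | wc -l)
    echo "ANA-inaccessible completions: $total"
    # break them down by submission queue id
    grep -o 'INACCESSIBLE (03/02) qid:[0-9]*' "$LOG" | awk '{print $NF}' | sort | uniq -c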
00:17:03.316 [2024-07-15 08:29:52.515442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.316 [2024-07-15 08:29:52.515492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.515527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.515563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:03.316 [2024-07-15 08:29:52.515584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.316 [2024-07-15 08:29:52.515599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.515619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.515634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.515655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.515670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.515691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.515707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.515727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.515772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.515794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.515809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.515830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.317 [2024-07-15 08:29:52.515853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:03.317 [2024-07-15 08:29:52.517792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.317 [2024-07-15 08:29:52.517808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:03.317 Received shutdown signal, test time was about 33.189152 seconds 00:17:03.317 00:17:03.317 Latency(us) 00:17:03.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.317 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:03.317 Verification LBA range: start 0x0 length 0x4000 00:17:03.317 Nvme0n1 : 33.19 8829.31 34.49 0.00 0.00 14465.29 139.64 4026531.84 00:17:03.317 =================================================================================================================== 00:17:03.317 Total : 8829.31 34.49 0.00 0.00 14465.29 139.64 4026531.84 00:17:03.317 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.576 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:03.576 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:03.576 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:03.576 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.576 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.834 rmmod nvme_tcp 00:17:03.834 rmmod nvme_fabrics 00:17:03.834 rmmod nvme_keyring 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76979 ']' 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76979 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76979 ']' 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76979 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76979 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.834 killing process with pid 76979 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76979' 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76979 00:17:03.834 08:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76979 00:17:04.092 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:04.093 00:17:04.093 real 0m39.015s 00:17:04.093 user 2m6.179s 00:17:04.093 sys 0m11.463s 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.093 08:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:04.093 ************************************ 00:17:04.093 END TEST nvmf_host_multipath_status 00:17:04.093 ************************************ 00:17:04.093 08:29:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.093 08:29:56 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:04.093 08:29:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.093 08:29:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.093 08:29:56 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.093 ************************************ 00:17:04.093 START TEST nvmf_discovery_remove_ifc 00:17:04.093 ************************************ 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:04.093 * Looking for test storage... 00:17:04.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.093 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.352 08:29:56 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:04.352 Cannot find device "nvmf_tgt_br" 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.352 Cannot find device "nvmf_tgt_br2" 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:04.352 Cannot find device "nvmf_tgt_br" 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:04.352 Cannot find device "nvmf_tgt_br2" 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.352 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
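Note: the nvmf_veth_init commands around this point (continuing just below with the bridge, the iptables rule and the ping checks) build the test topology: a namespace nvmf_tgt_ns_spdk holding the target-side interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), an initiator-side nvmf_init_if (10.0.0.1) in the root namespace, and a bridge nvmf_br joining the veth peer ends, with TCP/4420 admitted through iptables. A condensed bash sketch of the same idea, reduced to a single target interface; the interface names and addresses match the trace, but this is a paraphrase of the helper, not its source:

    # Sketch (run as root): one veth pair into the target namespace, bridged to the
    # initiator-side veth pair that stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability, as checked below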
00:17:04.353 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:04.353 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:04.353 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:04.353 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.353 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.353 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:04.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:17:04.612 00:17:04.612 --- 10.0.0.2 ping statistics --- 00:17:04.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.612 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:04.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:04.612 00:17:04.612 --- 10.0.0.3 ping statistics --- 00:17:04.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.612 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:04.612 00:17:04.612 --- 10.0.0.1 ping statistics --- 00:17:04.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.612 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77810 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77810 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77810 ']' 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.612 08:29:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.612 [2024-07-15 08:29:56.687646] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
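Note: at this point nvmfappstart has launched nvmf_tgt inside the namespace and waitforlisten 77810 is polling until the application answers on its RPC socket. A rough sketch of that wait, assuming only that the target exposes /var/tmp/spdk.sock; the real helper in autotest_common.sh is more involved:

    # Poll until the freshly started target responds to RPC (sketch only).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=77810                     # pid reported by nvmfappstart in this run
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done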
00:17:04.612 [2024-07-15 08:29:56.687763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.871 [2024-07-15 08:29:56.826802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.871 [2024-07-15 08:29:56.928374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.871 [2024-07-15 08:29:56.928438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.871 [2024-07-15 08:29:56.928449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.871 [2024-07-15 08:29:56.928456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.871 [2024-07-15 08:29:56.928462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.871 [2024-07-15 08:29:56.928485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.871 [2024-07-15 08:29:56.981321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.806 [2024-07-15 08:29:57.694538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.806 [2024-07-15 08:29:57.702601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:05.806 null0 00:17:05.806 [2024-07-15 08:29:57.734535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77838 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77838 /tmp/host.sock 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77838 ']' 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:05.806 08:29:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.806 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:05.806 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:05.807 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.807 08:29:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.807 [2024-07-15 08:29:57.819225] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:05.807 [2024-07-15 08:29:57.819364] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77838 ] 00:17:05.807 [2024-07-15 08:29:57.963957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.065 [2024-07-15 08:29:58.090072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.632 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.632 [2024-07-15 08:29:58.806901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:06.889 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.889 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:06.889 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.889 08:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.824 [2024-07-15 08:29:59.857658] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:07.824 [2024-07-15 08:29:59.857705] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:07.824 [2024-07-15 08:29:59.857732] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:07.824 [2024-07-15 08:29:59.863719] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:07.824 [2024-07-15 08:29:59.921026] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:07.824 [2024-07-15 08:29:59.921100] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:07.824 [2024-07-15 08:29:59.921132] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:07.824 [2024-07-15 08:29:59.921153] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:07.824 [2024-07-15 08:29:59.921179] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:07.824 [2024-07-15 08:29:59.926187] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x89dde0 was disconnected and freed. delete nvme_qpair. 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:07.824 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:08.083 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.083 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:08.083 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:08.083 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.083 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.083 08:29:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:17:08.083 08:30:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.083 08:30:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:08.083 08:30:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:09.043 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.043 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.043 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.043 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.043 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.043 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.044 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.044 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.044 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:09.044 08:30:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.976 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.234 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:10.234 08:30:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:11.175 08:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.116 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.424 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.424 08:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.358 [2024-07-15 08:30:05.348807] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:13.358 [2024-07-15 08:30:05.348863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.358 [2024-07-15 08:30:05.348878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.358 [2024-07-15 08:30:05.348891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.358 [2024-07-15 08:30:05.348901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.358 [2024-07-15 08:30:05.348911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.358 [2024-07-15 08:30:05.348921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.358 [2024-07-15 08:30:05.348931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:13.358 [2024-07-15 08:30:05.348940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.358 [2024-07-15 08:30:05.348950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.358 [2024-07-15 08:30:05.348959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.358 [2024-07-15 08:30:05.348969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x803ac0 is same with the state(5) to be set 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.358 08:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.358 [2024-07-15 08:30:05.358803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x803ac0 (9): Bad file descriptor 00:17:13.358 [2024-07-15 08:30:05.368829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.294 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.294 [2024-07-15 08:30:06.376819] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:14.294 [2024-07-15 08:30:06.376907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x803ac0 with addr=10.0.0.2, port=4420 00:17:14.294 [2024-07-15 08:30:06.376932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x803ac0 is same with the state(5) to be set 00:17:14.294 [2024-07-15 08:30:06.376984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x803ac0 (9): Bad file descriptor 00:17:14.295 [2024-07-15 08:30:06.377545] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.295 [2024-07-15 08:30:06.377591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:14.295 [2024-07-15 08:30:06.377606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:14.295 [2024-07-15 08:30:06.377621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:14.295 [2024-07-15 08:30:06.377652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
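Note: the wait_for_bdev / get_bdev_list entries that keep repeating above are a one-second poll of the host application's bdev table over its private RPC socket, comparing the joined name list against the expected value: nvme0n1 right after discovery attaches, then the empty string that the current loop is waiting for while the target path is down. A sketch of that pattern using rpc.py directly (the test's rpc_cmd wrapper and its timeout handling are omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1               # same cadence as the 'sleep 1' entries above
        done
    }
    wait_for_bdev nvme0n1    # namespace exposed as a bdev once discovery attaches
    wait_for_bdev ''         # list drains after the ctrlr-loss timeout expires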
00:17:14.295 [2024-07-15 08:30:06.377668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:14.295 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.295 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:14.295 08:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.230 [2024-07-15 08:30:07.377722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:15.230 [2024-07-15 08:30:07.377817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:15.230 [2024-07-15 08:30:07.377831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:15.230 [2024-07-15 08:30:07.377841] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:15.230 [2024-07-15 08:30:07.377867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:15.230 [2024-07-15 08:30:07.377898] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:15.230 [2024-07-15 08:30:07.377955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.230 [2024-07-15 08:30:07.377972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.230 [2024-07-15 08:30:07.377985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.230 [2024-07-15 08:30:07.377994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.230 [2024-07-15 08:30:07.378004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.230 [2024-07-15 08:30:07.378013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.230 [2024-07-15 08:30:07.378023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.230 [2024-07-15 08:30:07.378032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.230 [2024-07-15 08:30:07.378042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.230 [2024-07-15 08:30:07.378051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.230 [2024-07-15 08:30:07.378060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
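The errno 110 (connection timed out) and Bad file descriptor errors, followed by the repeated "controller reinitialization failed" / "Resetting controller failed" messages, are the bdev_nvme layer retrying a reset against a target address that has been taken away. While such a retry loop runs, the host app can be queried over the same RPC socket to see which controllers it still tracks; a hedged example (the socket path is taken from the log, the call itself is a generic SPDK RPC and not part of the test flow shown here):

  # List the NVMe-oF controllers known to the host-side SPDK app, together
  # with the transport address each one is attached to.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers

  # Equivalent direct invocation of the RPC client shipped with SPDK.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers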
00:17:15.230 [2024-07-15 08:30:07.378619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807860 (9): Bad file descriptor 00:17:15.230 [2024-07-15 08:30:07.379646] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:15.230 [2024-07-15 08:30:07.379672] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:15.489 08:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.426 08:30:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.426 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.687 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:16.687 08:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:17.253 [2024-07-15 08:30:09.382808] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:17.253 [2024-07-15 08:30:09.382899] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:17.253 [2024-07-15 08:30:09.382919] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:17.253 [2024-07-15 08:30:09.388860] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:17.512 [2024-07-15 08:30:09.445444] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:17.512 [2024-07-15 08:30:09.445513] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:17.512 [2024-07-15 08:30:09.445540] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:17.512 [2024-07-15 08:30:09.445556] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:17.512 [2024-07-15 08:30:09.445565] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:17.512 [2024-07-15 08:30:09.451557] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8aad90 was disconnected and freed. delete nvme_qpair. 
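The "discovery ctrlr attached" / "new subsystem nvme1" / "attach nvme1 done" messages above are the host-side discovery poller reacting to the target address coming back: the subsystem advertised at 10.0.0.2:8009 is re-attached and its namespace reappears as the nvme1n1 bdev that the next poll loop waits for. For reference, a discovery service of this kind is started on the host app with SPDK's bdev_nvme_start_discovery RPC; a hedged sketch under the assumption of standard scripts/rpc.py option names (the exact invocation used by this test is not shown in this part of the log):

  # Attach everything advertised by the discovery service at 10.0.0.2:8009,
  # name the resulting controllers nvme0, nvme1, ..., and keep following the
  # discovery log so subsystems are re-attached when they reappear.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4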
00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77838 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77838 ']' 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77838 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77838 00:17:17.512 killing process with pid 77838 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77838' 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77838 00:17:17.512 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77838 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.078 08:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.078 rmmod nvme_tcp 00:17:18.078 rmmod nvme_fabrics 00:17:18.078 rmmod nvme_keyring 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:18.078 08:30:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77810 ']' 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77810 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77810 ']' 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77810 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77810 00:17:18.078 killing process with pid 77810 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77810' 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77810 00:17:18.078 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77810 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:18.336 ************************************ 00:17:18.336 END TEST nvmf_discovery_remove_ifc 00:17:18.336 ************************************ 00:17:18.336 00:17:18.336 real 0m14.184s 00:17:18.336 user 0m24.479s 00:17:18.336 sys 0m2.512s 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.336 08:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.336 08:30:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.336 08:30:10 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:18.336 08:30:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.336 08:30:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.336 08:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.336 ************************************ 00:17:18.336 START TEST nvmf_identify_kernel_target 00:17:18.336 ************************************ 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:18.336 * Looking for test storage... 00:17:18.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.336 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.594 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:18.594 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:18.594 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:18.595 Cannot find device "nvmf_tgt_br" 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.595 Cannot find device "nvmf_tgt_br2" 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:18.595 Cannot find device "nvmf_tgt_br" 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:18.595 Cannot find device "nvmf_tgt_br2" 00:17:18.595 08:30:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.595 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:18.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:17:18.854 00:17:18.854 --- 10.0.0.2 ping statistics --- 00:17:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.854 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:18.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:18.854 00:17:18.854 --- 10.0.0.3 ping statistics --- 00:17:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.854 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:18.854 00:17:18.854 --- 10.0.0.1 ping statistics --- 00:17:18.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.854 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:18.854 08:30:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:19.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:19.111 Waiting for block devices as requested 00:17:19.369 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:19.369 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:19.369 No valid GPT data, bailing 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:19.369 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:19.628 No valid GPT data, bailing 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:19.628 No valid GPT data, bailing 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:19.628 No valid GPT data, bailing 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
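The mkdir above, together with the namespace and port writes on the following lines, is the standard configfs sequence for exporting a local block device through the kernel nvmet target over TCP: create the subsystem, point a namespace at the selected /dev/nvme1n1, create a port listening on 10.0.0.1:4420, and link the subsystem into the port. A consolidated sketch of what those mkdir/echo/ln -s steps add up to (the values come from the log, but the configfs attribute names each echo lands in are inferred from the usual nvmet layout rather than quoted from the helper):

  # Kernel NVMe-oF/TCP target backed by a local NVMe namespace.
  modprobe nvmet   # nvmet-tcp must be available; it is typically autoloaded when the port is enabled
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1                                > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
  echo 1                                > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"

  # Linking the subsystem under the port is what actually starts the listener.
  ln -s "$subsys" "$port/subsystems/"

With the link in place, the nvme discover run a few lines below reports two discovery log records at 10.0.0.1:4420: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.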
00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:19.628 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:19.886 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -a 10.0.0.1 -t tcp -s 4420 00:17:19.886 00:17:19.886 Discovery Log Number of Records 2, Generation counter 2 00:17:19.886 =====Discovery Log Entry 0====== 00:17:19.886 trtype: tcp 00:17:19.886 adrfam: ipv4 00:17:19.886 subtype: current discovery subsystem 00:17:19.886 treq: not specified, sq flow control disable supported 00:17:19.886 portid: 1 00:17:19.886 trsvcid: 4420 00:17:19.886 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:19.886 traddr: 10.0.0.1 00:17:19.886 eflags: none 00:17:19.886 sectype: none 00:17:19.886 =====Discovery Log Entry 1====== 00:17:19.886 trtype: tcp 00:17:19.886 adrfam: ipv4 00:17:19.886 subtype: nvme subsystem 00:17:19.886 treq: not specified, sq flow control disable supported 00:17:19.886 portid: 1 00:17:19.886 trsvcid: 4420 00:17:19.886 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:19.886 traddr: 10.0.0.1 00:17:19.886 eflags: none 00:17:19.886 sectype: none 00:17:19.886 08:30:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:19.886 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:19.886 ===================================================== 00:17:19.886 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:19.886 ===================================================== 00:17:19.886 Controller Capabilities/Features 00:17:19.886 ================================ 00:17:19.886 Vendor ID: 0000 00:17:19.886 Subsystem Vendor ID: 0000 00:17:19.886 Serial Number: cb90b9cc70225fa64415 00:17:19.886 Model Number: Linux 00:17:19.886 Firmware Version: 6.7.0-68 00:17:19.886 Recommended Arb Burst: 0 00:17:19.886 IEEE OUI Identifier: 00 00 00 00:17:19.886 Multi-path I/O 00:17:19.886 May have multiple subsystem ports: No 00:17:19.886 May have multiple controllers: No 00:17:19.886 Associated with SR-IOV VF: No 00:17:19.886 Max Data Transfer Size: Unlimited 00:17:19.886 Max Number of Namespaces: 0 
00:17:19.886 Max Number of I/O Queues: 1024 00:17:19.886 NVMe Specification Version (VS): 1.3 00:17:19.886 NVMe Specification Version (Identify): 1.3 00:17:19.886 Maximum Queue Entries: 1024 00:17:19.886 Contiguous Queues Required: No 00:17:19.886 Arbitration Mechanisms Supported 00:17:19.886 Weighted Round Robin: Not Supported 00:17:19.886 Vendor Specific: Not Supported 00:17:19.886 Reset Timeout: 7500 ms 00:17:19.886 Doorbell Stride: 4 bytes 00:17:19.886 NVM Subsystem Reset: Not Supported 00:17:19.886 Command Sets Supported 00:17:19.886 NVM Command Set: Supported 00:17:19.886 Boot Partition: Not Supported 00:17:19.886 Memory Page Size Minimum: 4096 bytes 00:17:19.886 Memory Page Size Maximum: 4096 bytes 00:17:19.886 Persistent Memory Region: Not Supported 00:17:19.886 Optional Asynchronous Events Supported 00:17:19.886 Namespace Attribute Notices: Not Supported 00:17:19.886 Firmware Activation Notices: Not Supported 00:17:19.886 ANA Change Notices: Not Supported 00:17:19.886 PLE Aggregate Log Change Notices: Not Supported 00:17:19.886 LBA Status Info Alert Notices: Not Supported 00:17:19.886 EGE Aggregate Log Change Notices: Not Supported 00:17:19.886 Normal NVM Subsystem Shutdown event: Not Supported 00:17:19.886 Zone Descriptor Change Notices: Not Supported 00:17:19.886 Discovery Log Change Notices: Supported 00:17:19.886 Controller Attributes 00:17:19.886 128-bit Host Identifier: Not Supported 00:17:19.886 Non-Operational Permissive Mode: Not Supported 00:17:19.886 NVM Sets: Not Supported 00:17:19.886 Read Recovery Levels: Not Supported 00:17:19.886 Endurance Groups: Not Supported 00:17:19.886 Predictable Latency Mode: Not Supported 00:17:19.886 Traffic Based Keep ALive: Not Supported 00:17:19.886 Namespace Granularity: Not Supported 00:17:19.886 SQ Associations: Not Supported 00:17:19.886 UUID List: Not Supported 00:17:19.886 Multi-Domain Subsystem: Not Supported 00:17:19.886 Fixed Capacity Management: Not Supported 00:17:19.886 Variable Capacity Management: Not Supported 00:17:19.886 Delete Endurance Group: Not Supported 00:17:19.886 Delete NVM Set: Not Supported 00:17:19.886 Extended LBA Formats Supported: Not Supported 00:17:19.886 Flexible Data Placement Supported: Not Supported 00:17:19.886 00:17:19.886 Controller Memory Buffer Support 00:17:19.886 ================================ 00:17:19.886 Supported: No 00:17:19.886 00:17:19.886 Persistent Memory Region Support 00:17:19.886 ================================ 00:17:19.886 Supported: No 00:17:19.886 00:17:19.886 Admin Command Set Attributes 00:17:19.886 ============================ 00:17:19.886 Security Send/Receive: Not Supported 00:17:19.886 Format NVM: Not Supported 00:17:19.886 Firmware Activate/Download: Not Supported 00:17:19.886 Namespace Management: Not Supported 00:17:19.886 Device Self-Test: Not Supported 00:17:19.886 Directives: Not Supported 00:17:19.886 NVMe-MI: Not Supported 00:17:19.886 Virtualization Management: Not Supported 00:17:19.886 Doorbell Buffer Config: Not Supported 00:17:19.886 Get LBA Status Capability: Not Supported 00:17:19.886 Command & Feature Lockdown Capability: Not Supported 00:17:19.886 Abort Command Limit: 1 00:17:19.886 Async Event Request Limit: 1 00:17:19.886 Number of Firmware Slots: N/A 00:17:19.886 Firmware Slot 1 Read-Only: N/A 00:17:19.886 Firmware Activation Without Reset: N/A 00:17:19.886 Multiple Update Detection Support: N/A 00:17:19.886 Firmware Update Granularity: No Information Provided 00:17:19.886 Per-Namespace SMART Log: No 00:17:19.886 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:19.886 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:19.886 Command Effects Log Page: Not Supported 00:17:19.886 Get Log Page Extended Data: Supported 00:17:19.886 Telemetry Log Pages: Not Supported 00:17:19.886 Persistent Event Log Pages: Not Supported 00:17:19.886 Supported Log Pages Log Page: May Support 00:17:19.886 Commands Supported & Effects Log Page: Not Supported 00:17:19.886 Feature Identifiers & Effects Log Page:May Support 00:17:19.886 NVMe-MI Commands & Effects Log Page: May Support 00:17:19.886 Data Area 4 for Telemetry Log: Not Supported 00:17:19.886 Error Log Page Entries Supported: 1 00:17:19.886 Keep Alive: Not Supported 00:17:19.886 00:17:19.886 NVM Command Set Attributes 00:17:19.886 ========================== 00:17:19.886 Submission Queue Entry Size 00:17:19.886 Max: 1 00:17:19.886 Min: 1 00:17:19.886 Completion Queue Entry Size 00:17:19.886 Max: 1 00:17:19.886 Min: 1 00:17:19.886 Number of Namespaces: 0 00:17:19.886 Compare Command: Not Supported 00:17:19.886 Write Uncorrectable Command: Not Supported 00:17:19.886 Dataset Management Command: Not Supported 00:17:19.886 Write Zeroes Command: Not Supported 00:17:19.886 Set Features Save Field: Not Supported 00:17:19.886 Reservations: Not Supported 00:17:19.886 Timestamp: Not Supported 00:17:19.886 Copy: Not Supported 00:17:19.886 Volatile Write Cache: Not Present 00:17:19.886 Atomic Write Unit (Normal): 1 00:17:19.886 Atomic Write Unit (PFail): 1 00:17:19.886 Atomic Compare & Write Unit: 1 00:17:19.886 Fused Compare & Write: Not Supported 00:17:19.886 Scatter-Gather List 00:17:19.886 SGL Command Set: Supported 00:17:19.886 SGL Keyed: Not Supported 00:17:19.886 SGL Bit Bucket Descriptor: Not Supported 00:17:19.886 SGL Metadata Pointer: Not Supported 00:17:19.886 Oversized SGL: Not Supported 00:17:19.887 SGL Metadata Address: Not Supported 00:17:19.887 SGL Offset: Supported 00:17:19.887 Transport SGL Data Block: Not Supported 00:17:19.887 Replay Protected Memory Block: Not Supported 00:17:19.887 00:17:19.887 Firmware Slot Information 00:17:19.887 ========================= 00:17:19.887 Active slot: 0 00:17:19.887 00:17:19.887 00:17:19.887 Error Log 00:17:19.887 ========= 00:17:19.887 00:17:19.887 Active Namespaces 00:17:19.887 ================= 00:17:19.887 Discovery Log Page 00:17:19.887 ================== 00:17:19.887 Generation Counter: 2 00:17:19.887 Number of Records: 2 00:17:19.887 Record Format: 0 00:17:19.887 00:17:19.887 Discovery Log Entry 0 00:17:19.887 ---------------------- 00:17:19.887 Transport Type: 3 (TCP) 00:17:19.887 Address Family: 1 (IPv4) 00:17:19.887 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:19.887 Entry Flags: 00:17:19.887 Duplicate Returned Information: 0 00:17:19.887 Explicit Persistent Connection Support for Discovery: 0 00:17:19.887 Transport Requirements: 00:17:19.887 Secure Channel: Not Specified 00:17:19.887 Port ID: 1 (0x0001) 00:17:19.887 Controller ID: 65535 (0xffff) 00:17:19.887 Admin Max SQ Size: 32 00:17:19.887 Transport Service Identifier: 4420 00:17:19.887 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:19.887 Transport Address: 10.0.0.1 00:17:19.887 Discovery Log Entry 1 00:17:19.887 ---------------------- 00:17:19.887 Transport Type: 3 (TCP) 00:17:19.887 Address Family: 1 (IPv4) 00:17:19.887 Subsystem Type: 2 (NVM Subsystem) 00:17:19.887 Entry Flags: 00:17:19.887 Duplicate Returned Information: 0 00:17:19.887 Explicit Persistent Connection Support for Discovery: 0 00:17:19.887 Transport Requirements: 00:17:19.887 
Secure Channel: Not Specified 00:17:19.887 Port ID: 1 (0x0001) 00:17:19.887 Controller ID: 65535 (0xffff) 00:17:19.887 Admin Max SQ Size: 32 00:17:19.887 Transport Service Identifier: 4420 00:17:19.887 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:19.887 Transport Address: 10.0.0.1 00:17:19.887 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:20.145 get_feature(0x01) failed 00:17:20.145 get_feature(0x02) failed 00:17:20.145 get_feature(0x04) failed 00:17:20.145 ===================================================== 00:17:20.145 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:20.145 ===================================================== 00:17:20.145 Controller Capabilities/Features 00:17:20.145 ================================ 00:17:20.145 Vendor ID: 0000 00:17:20.145 Subsystem Vendor ID: 0000 00:17:20.145 Serial Number: 45b124e5a0171c5cc14b 00:17:20.145 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:20.145 Firmware Version: 6.7.0-68 00:17:20.145 Recommended Arb Burst: 6 00:17:20.145 IEEE OUI Identifier: 00 00 00 00:17:20.145 Multi-path I/O 00:17:20.145 May have multiple subsystem ports: Yes 00:17:20.145 May have multiple controllers: Yes 00:17:20.145 Associated with SR-IOV VF: No 00:17:20.145 Max Data Transfer Size: Unlimited 00:17:20.145 Max Number of Namespaces: 1024 00:17:20.145 Max Number of I/O Queues: 128 00:17:20.145 NVMe Specification Version (VS): 1.3 00:17:20.145 NVMe Specification Version (Identify): 1.3 00:17:20.145 Maximum Queue Entries: 1024 00:17:20.145 Contiguous Queues Required: No 00:17:20.145 Arbitration Mechanisms Supported 00:17:20.145 Weighted Round Robin: Not Supported 00:17:20.145 Vendor Specific: Not Supported 00:17:20.145 Reset Timeout: 7500 ms 00:17:20.145 Doorbell Stride: 4 bytes 00:17:20.145 NVM Subsystem Reset: Not Supported 00:17:20.145 Command Sets Supported 00:17:20.145 NVM Command Set: Supported 00:17:20.145 Boot Partition: Not Supported 00:17:20.145 Memory Page Size Minimum: 4096 bytes 00:17:20.145 Memory Page Size Maximum: 4096 bytes 00:17:20.145 Persistent Memory Region: Not Supported 00:17:20.145 Optional Asynchronous Events Supported 00:17:20.145 Namespace Attribute Notices: Supported 00:17:20.145 Firmware Activation Notices: Not Supported 00:17:20.145 ANA Change Notices: Supported 00:17:20.145 PLE Aggregate Log Change Notices: Not Supported 00:17:20.145 LBA Status Info Alert Notices: Not Supported 00:17:20.145 EGE Aggregate Log Change Notices: Not Supported 00:17:20.145 Normal NVM Subsystem Shutdown event: Not Supported 00:17:20.145 Zone Descriptor Change Notices: Not Supported 00:17:20.145 Discovery Log Change Notices: Not Supported 00:17:20.145 Controller Attributes 00:17:20.145 128-bit Host Identifier: Supported 00:17:20.145 Non-Operational Permissive Mode: Not Supported 00:17:20.145 NVM Sets: Not Supported 00:17:20.145 Read Recovery Levels: Not Supported 00:17:20.145 Endurance Groups: Not Supported 00:17:20.145 Predictable Latency Mode: Not Supported 00:17:20.145 Traffic Based Keep ALive: Supported 00:17:20.145 Namespace Granularity: Not Supported 00:17:20.145 SQ Associations: Not Supported 00:17:20.145 UUID List: Not Supported 00:17:20.145 Multi-Domain Subsystem: Not Supported 00:17:20.145 Fixed Capacity Management: Not Supported 00:17:20.145 Variable Capacity Management: Not Supported 00:17:20.145 
Delete Endurance Group: Not Supported 00:17:20.145 Delete NVM Set: Not Supported 00:17:20.145 Extended LBA Formats Supported: Not Supported 00:17:20.145 Flexible Data Placement Supported: Not Supported 00:17:20.145 00:17:20.145 Controller Memory Buffer Support 00:17:20.145 ================================ 00:17:20.145 Supported: No 00:17:20.145 00:17:20.145 Persistent Memory Region Support 00:17:20.145 ================================ 00:17:20.145 Supported: No 00:17:20.145 00:17:20.145 Admin Command Set Attributes 00:17:20.145 ============================ 00:17:20.145 Security Send/Receive: Not Supported 00:17:20.145 Format NVM: Not Supported 00:17:20.145 Firmware Activate/Download: Not Supported 00:17:20.145 Namespace Management: Not Supported 00:17:20.145 Device Self-Test: Not Supported 00:17:20.145 Directives: Not Supported 00:17:20.145 NVMe-MI: Not Supported 00:17:20.145 Virtualization Management: Not Supported 00:17:20.145 Doorbell Buffer Config: Not Supported 00:17:20.145 Get LBA Status Capability: Not Supported 00:17:20.145 Command & Feature Lockdown Capability: Not Supported 00:17:20.145 Abort Command Limit: 4 00:17:20.145 Async Event Request Limit: 4 00:17:20.145 Number of Firmware Slots: N/A 00:17:20.145 Firmware Slot 1 Read-Only: N/A 00:17:20.145 Firmware Activation Without Reset: N/A 00:17:20.145 Multiple Update Detection Support: N/A 00:17:20.145 Firmware Update Granularity: No Information Provided 00:17:20.145 Per-Namespace SMART Log: Yes 00:17:20.145 Asymmetric Namespace Access Log Page: Supported 00:17:20.145 ANA Transition Time : 10 sec 00:17:20.145 00:17:20.145 Asymmetric Namespace Access Capabilities 00:17:20.145 ANA Optimized State : Supported 00:17:20.145 ANA Non-Optimized State : Supported 00:17:20.145 ANA Inaccessible State : Supported 00:17:20.145 ANA Persistent Loss State : Supported 00:17:20.145 ANA Change State : Supported 00:17:20.145 ANAGRPID is not changed : No 00:17:20.145 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:20.145 00:17:20.145 ANA Group Identifier Maximum : 128 00:17:20.145 Number of ANA Group Identifiers : 128 00:17:20.145 Max Number of Allowed Namespaces : 1024 00:17:20.145 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:20.145 Command Effects Log Page: Supported 00:17:20.145 Get Log Page Extended Data: Supported 00:17:20.145 Telemetry Log Pages: Not Supported 00:17:20.145 Persistent Event Log Pages: Not Supported 00:17:20.145 Supported Log Pages Log Page: May Support 00:17:20.145 Commands Supported & Effects Log Page: Not Supported 00:17:20.145 Feature Identifiers & Effects Log Page:May Support 00:17:20.145 NVMe-MI Commands & Effects Log Page: May Support 00:17:20.145 Data Area 4 for Telemetry Log: Not Supported 00:17:20.145 Error Log Page Entries Supported: 128 00:17:20.146 Keep Alive: Supported 00:17:20.146 Keep Alive Granularity: 1000 ms 00:17:20.146 00:17:20.146 NVM Command Set Attributes 00:17:20.146 ========================== 00:17:20.146 Submission Queue Entry Size 00:17:20.146 Max: 64 00:17:20.146 Min: 64 00:17:20.146 Completion Queue Entry Size 00:17:20.146 Max: 16 00:17:20.146 Min: 16 00:17:20.146 Number of Namespaces: 1024 00:17:20.146 Compare Command: Not Supported 00:17:20.146 Write Uncorrectable Command: Not Supported 00:17:20.146 Dataset Management Command: Supported 00:17:20.146 Write Zeroes Command: Supported 00:17:20.146 Set Features Save Field: Not Supported 00:17:20.146 Reservations: Not Supported 00:17:20.146 Timestamp: Not Supported 00:17:20.146 Copy: Not Supported 00:17:20.146 Volatile Write Cache: Present 
00:17:20.146 Atomic Write Unit (Normal): 1 00:17:20.146 Atomic Write Unit (PFail): 1 00:17:20.146 Atomic Compare & Write Unit: 1 00:17:20.146 Fused Compare & Write: Not Supported 00:17:20.146 Scatter-Gather List 00:17:20.146 SGL Command Set: Supported 00:17:20.146 SGL Keyed: Not Supported 00:17:20.146 SGL Bit Bucket Descriptor: Not Supported 00:17:20.146 SGL Metadata Pointer: Not Supported 00:17:20.146 Oversized SGL: Not Supported 00:17:20.146 SGL Metadata Address: Not Supported 00:17:20.146 SGL Offset: Supported 00:17:20.146 Transport SGL Data Block: Not Supported 00:17:20.146 Replay Protected Memory Block: Not Supported 00:17:20.146 00:17:20.146 Firmware Slot Information 00:17:20.146 ========================= 00:17:20.146 Active slot: 0 00:17:20.146 00:17:20.146 Asymmetric Namespace Access 00:17:20.146 =========================== 00:17:20.146 Change Count : 0 00:17:20.146 Number of ANA Group Descriptors : 1 00:17:20.146 ANA Group Descriptor : 0 00:17:20.146 ANA Group ID : 1 00:17:20.146 Number of NSID Values : 1 00:17:20.146 Change Count : 0 00:17:20.146 ANA State : 1 00:17:20.146 Namespace Identifier : 1 00:17:20.146 00:17:20.146 Commands Supported and Effects 00:17:20.146 ============================== 00:17:20.146 Admin Commands 00:17:20.146 -------------- 00:17:20.146 Get Log Page (02h): Supported 00:17:20.146 Identify (06h): Supported 00:17:20.146 Abort (08h): Supported 00:17:20.146 Set Features (09h): Supported 00:17:20.146 Get Features (0Ah): Supported 00:17:20.146 Asynchronous Event Request (0Ch): Supported 00:17:20.146 Keep Alive (18h): Supported 00:17:20.146 I/O Commands 00:17:20.146 ------------ 00:17:20.146 Flush (00h): Supported 00:17:20.146 Write (01h): Supported LBA-Change 00:17:20.146 Read (02h): Supported 00:17:20.146 Write Zeroes (08h): Supported LBA-Change 00:17:20.146 Dataset Management (09h): Supported 00:17:20.146 00:17:20.146 Error Log 00:17:20.146 ========= 00:17:20.146 Entry: 0 00:17:20.146 Error Count: 0x3 00:17:20.146 Submission Queue Id: 0x0 00:17:20.146 Command Id: 0x5 00:17:20.146 Phase Bit: 0 00:17:20.146 Status Code: 0x2 00:17:20.146 Status Code Type: 0x0 00:17:20.146 Do Not Retry: 1 00:17:20.146 Error Location: 0x28 00:17:20.146 LBA: 0x0 00:17:20.146 Namespace: 0x0 00:17:20.146 Vendor Log Page: 0x0 00:17:20.146 ----------- 00:17:20.146 Entry: 1 00:17:20.146 Error Count: 0x2 00:17:20.146 Submission Queue Id: 0x0 00:17:20.146 Command Id: 0x5 00:17:20.146 Phase Bit: 0 00:17:20.146 Status Code: 0x2 00:17:20.146 Status Code Type: 0x0 00:17:20.146 Do Not Retry: 1 00:17:20.146 Error Location: 0x28 00:17:20.146 LBA: 0x0 00:17:20.146 Namespace: 0x0 00:17:20.146 Vendor Log Page: 0x0 00:17:20.146 ----------- 00:17:20.146 Entry: 2 00:17:20.146 Error Count: 0x1 00:17:20.146 Submission Queue Id: 0x0 00:17:20.146 Command Id: 0x4 00:17:20.146 Phase Bit: 0 00:17:20.146 Status Code: 0x2 00:17:20.146 Status Code Type: 0x0 00:17:20.146 Do Not Retry: 1 00:17:20.146 Error Location: 0x28 00:17:20.146 LBA: 0x0 00:17:20.146 Namespace: 0x0 00:17:20.146 Vendor Log Page: 0x0 00:17:20.146 00:17:20.146 Number of Queues 00:17:20.146 ================ 00:17:20.146 Number of I/O Submission Queues: 128 00:17:20.146 Number of I/O Completion Queues: 128 00:17:20.146 00:17:20.146 ZNS Specific Controller Data 00:17:20.146 ============================ 00:17:20.146 Zone Append Size Limit: 0 00:17:20.146 00:17:20.146 00:17:20.146 Active Namespaces 00:17:20.146 ================= 00:17:20.146 get_feature(0x05) failed 00:17:20.146 Namespace ID:1 00:17:20.146 Command Set Identifier: NVM (00h) 
00:17:20.146 Deallocate: Supported 00:17:20.146 Deallocated/Unwritten Error: Not Supported 00:17:20.146 Deallocated Read Value: Unknown 00:17:20.146 Deallocate in Write Zeroes: Not Supported 00:17:20.146 Deallocated Guard Field: 0xFFFF 00:17:20.146 Flush: Supported 00:17:20.146 Reservation: Not Supported 00:17:20.146 Namespace Sharing Capabilities: Multiple Controllers 00:17:20.146 Size (in LBAs): 1310720 (5GiB) 00:17:20.146 Capacity (in LBAs): 1310720 (5GiB) 00:17:20.146 Utilization (in LBAs): 1310720 (5GiB) 00:17:20.146 UUID: ca2243ed-9bd5-4ccb-9e55-81f52b8906fd 00:17:20.146 Thin Provisioning: Not Supported 00:17:20.146 Per-NS Atomic Units: Yes 00:17:20.146 Atomic Boundary Size (Normal): 0 00:17:20.146 Atomic Boundary Size (PFail): 0 00:17:20.146 Atomic Boundary Offset: 0 00:17:20.146 NGUID/EUI64 Never Reused: No 00:17:20.146 ANA group ID: 1 00:17:20.146 Namespace Write Protected: No 00:17:20.146 Number of LBA Formats: 1 00:17:20.146 Current LBA Format: LBA Format #00 00:17:20.146 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:20.146 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.146 rmmod nvme_tcp 00:17:20.146 rmmod nvme_fabrics 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.146 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:20.404 
08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:20.404 08:30:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:20.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:20.983 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:21.241 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:21.241 00:17:21.241 real 0m2.873s 00:17:21.241 user 0m1.015s 00:17:21.241 sys 0m1.316s 00:17:21.241 08:30:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.241 08:30:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.241 ************************************ 00:17:21.241 END TEST nvmf_identify_kernel_target 00:17:21.241 ************************************ 00:17:21.241 08:30:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:21.241 08:30:13 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:21.241 08:30:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:21.241 08:30:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.241 08:30:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.241 ************************************ 00:17:21.241 START TEST nvmf_auth_host 00:17:21.241 ************************************ 00:17:21.241 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:21.241 * Looking for test storage... 
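Condensed, the clean_kernel_target teardown traced just above (nvmf/common.sh@684..698) performs roughly the following; the NQN and port index are the ones used throughout this run, and since xtrace does not show redirections, the destination of the bare 'echo 0' is an assumption (disabling the namespace before removal):

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed destination of the 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # unlink the subsystem from the port
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet                                             # only succeeds once nothing holds nvmet
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh                           # rebind local NVMe devices to userspace drivers

The '0000:00:10.0 / 0000:00:11.0 ... nvme -> uio_pci_generic' lines above are that final setup.sh run handing the local NVMe controllers back to the userspace driver before nvmf_auth_host starts.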
00:17:21.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.499 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:21.500 Cannot find device "nvmf_tgt_br" 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.500 Cannot find device "nvmf_tgt_br2" 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:21.500 Cannot find device "nvmf_tgt_br" 
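The 'Cannot find device' messages here are just nvmf_veth_init clearing out interfaces left over from a previous run before rebuilding them. The topology it creates over the next lines looks roughly like this (names and addresses are the ones that appear in this trace; the various 'ip link set ... up' calls and the FORWARD iptables rule are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                        # bridge the three *_br peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings at nvmf/common.sh@205..207 then confirm 10.0.0.2, 10.0.0.3 and 10.0.0.1 are reachable across the bridge, after which nvmf_tgt is started inside the namespace ('ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth').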
00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:21.500 Cannot find device "nvmf_tgt_br2" 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:21.500 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:21.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:17:21.758 00:17:21.758 --- 10.0.0.2 ping statistics --- 00:17:21.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.758 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:21.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:21.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:21.758 00:17:21.758 --- 10.0.0.3 ping statistics --- 00:17:21.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.758 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:21.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:21.758 00:17:21.758 --- 10.0.0.1 ping statistics --- 00:17:21.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.758 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78731 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78731 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78731 ']' 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.758 08:30:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.758 08:30:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5226f80431dc9ca2e26f958e128ca3e2 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pdX 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5226f80431dc9ca2e26f958e128ca3e2 0 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5226f80431dc9ca2e26f958e128ca3e2 0 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5226f80431dc9ca2e26f958e128ca3e2 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:23.135 08:30:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pdX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pdX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pdX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=158f75ad893a7790231caf2f1cb3cd3de66bf3392f85d23b580c88a77bf61f0b 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.g8Y 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 158f75ad893a7790231caf2f1cb3cd3de66bf3392f85d23b580c88a77bf61f0b 3 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 158f75ad893a7790231caf2f1cb3cd3de66bf3392f85d23b580c88a77bf61f0b 3 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=158f75ad893a7790231caf2f1cb3cd3de66bf3392f85d23b580c88a77bf61f0b 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.g8Y 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.g8Y 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.g8Y 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=958210628622dfe3f5e1494c10137679159fc09a7c908cec 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Vbm 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 958210628622dfe3f5e1494c10137679159fc09a7c908cec 0 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 958210628622dfe3f5e1494c10137679159fc09a7c908cec 0 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=958210628622dfe3f5e1494c10137679159fc09a7c908cec 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Vbm 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Vbm 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Vbm 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f602601ac54fb93387b3eee841d6fa8d4f5156803e5d5106 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ppz 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f602601ac54fb93387b3eee841d6fa8d4f5156803e5d5106 2 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f602601ac54fb93387b3eee841d6fa8d4f5156803e5d5106 2 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f602601ac54fb93387b3eee841d6fa8d4f5156803e5d5106 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ppz 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ppz 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ppz 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0128a379db05834569274c3e9de66ba3 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hQ2 00:17:23.135 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0128a379db05834569274c3e9de66ba3 
1 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0128a379db05834569274c3e9de66ba3 1 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0128a379db05834569274c3e9de66ba3 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hQ2 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hQ2 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hQ2 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:23.136 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c9fa8a7439777a48ea830c06ed426a9 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.X5N 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c9fa8a7439777a48ea830c06ed426a9 1 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c9fa8a7439777a48ea830c06ed426a9 1 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.394 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c9fa8a7439777a48ea830c06ed426a9 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.X5N 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.X5N 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.X5N 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:23.395 08:30:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73630e9240a81dd5b141745575b5ae41311ca37a47a27f27 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zIM 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73630e9240a81dd5b141745575b5ae41311ca37a47a27f27 2 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73630e9240a81dd5b141745575b5ae41311ca37a47a27f27 2 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73630e9240a81dd5b141745575b5ae41311ca37a47a27f27 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zIM 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zIM 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zIM 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1ea7b2aadf4a8ea26858b33b01f862b 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nBb 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1ea7b2aadf4a8ea26858b33b01f862b 0 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f1ea7b2aadf4a8ea26858b33b01f862b 0 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f1ea7b2aadf4a8ea26858b33b01f862b 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nBb 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nBb 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nBb 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0086252a376caf36073864d84a28bbe7689f470a0a7c001aada30e783b2385cb 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HgU 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0086252a376caf36073864d84a28bbe7689f470a0a7c001aada30e783b2385cb 3 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0086252a376caf36073864d84a28bbe7689f470a0a7c001aada30e783b2385cb 3 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0086252a376caf36073864d84a28bbe7689f470a0a7c001aada30e783b2385cb 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:23.395 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HgU 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HgU 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.HgU 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78731 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78731 ']' 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
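All five key/ckey pairs prepared above come out of gen_dhchap_key the same way; reconstructed from the trace (xtrace does not print redirections, so exactly where the formatted secret is written is inferred rather than shown):

    # gen_dhchap_key <digest> <hex-length>, e.g. 'null 32' or 'sha512 64'
    key=$(xxd -p -c0 -l 16 /dev/urandom)     # hex-length/2 random bytes -> <hex-length> hex characters
    file=$(mktemp -t spdk.key-null.XXX)      # e.g. /tmp/spdk.key-null.pdX
    format_dhchap_key "$key" 0 > "$file"     # inferred: the helper (the short 'python -' seen in the trace)
                                             # emits the secret as 'DHHC-1:<digest id>:<base64 blob>:'
    chmod 0600 "$file"
    echo "$file"                             # this path is what lands in keys[i] / ckeys[i]

The resulting files are then registered with the running nvmf_tgt via 'rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pdX' and so on, which is what the host/auth.sh@80..82 loop just below does for key0..key4 and ckey0..ckey3.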
00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.653 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pdX 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.g8Y ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g8Y 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Vbm 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ppz ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ppz 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hQ2 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.X5N ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.X5N 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zIM 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nBb ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nBb 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.HgU 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
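configure_kernel_target, which begins here, builds a kernel nvmet subsystem over configfs and exposes it on 10.0.0.1:4420 so auth.sh has a kernel-side target to authenticate against. Condensed from the lines that follow (xtrace only shows the echo side of each write, so the configfs attribute paths below are reconstructed and should be read as assumptions; the backing device is whichever local namespace passes the GPT/in-use checks, /dev/nvme1n1 in this run):

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"             # assumed attribute path
    echo 1 > "$subsys/attr_allow_any_host"                                  # assumed attribute path
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The 'nvme discover' output further down (two discovery log records: the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0) confirms the port is live before nvmet_auth_init creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 and nvmet_auth_set_key starts writing the DHHC-1 secrets for it.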
00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:23.913 08:30:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:23.913 08:30:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:23.913 08:30:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:24.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:24.481 Waiting for block devices as requested 00:17:24.481 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:24.481 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:25.050 No valid GPT data, bailing 00:17:25.050 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:25.361 No valid GPT data, bailing 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:25.361 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:25.361 No valid GPT data, bailing 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:25.362 No valid GPT data, bailing 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:25.362 08:30:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -a 10.0.0.1 -t tcp -s 4420 00:17:25.362 00:17:25.362 Discovery Log Number of Records 2, Generation counter 2 00:17:25.362 =====Discovery Log Entry 0====== 00:17:25.362 trtype: tcp 00:17:25.362 adrfam: ipv4 00:17:25.362 subtype: current discovery subsystem 00:17:25.362 treq: not specified, sq flow control disable supported 00:17:25.362 portid: 1 00:17:25.362 trsvcid: 4420 00:17:25.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:25.362 traddr: 10.0.0.1 00:17:25.362 eflags: none 00:17:25.362 sectype: none 00:17:25.362 =====Discovery Log Entry 1====== 00:17:25.362 trtype: tcp 00:17:25.362 adrfam: ipv4 00:17:25.362 subtype: nvme subsystem 00:17:25.362 treq: not specified, sq flow control disable supported 00:17:25.362 portid: 1 00:17:25.362 trsvcid: 4420 00:17:25.362 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:25.362 traddr: 10.0.0.1 00:17:25.362 eflags: none 00:17:25.362 sectype: none 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.362 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.621 nvme0n1 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.621 08:30:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.881 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.882 nvme0n1 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.882 08:30:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.882 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.142 nvme0n1 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.142 08:30:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.142 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 nvme0n1 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.143 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:26.402 08:30:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.402 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.403 nvme0n1 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:26.403 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.661 nvme0n1 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.662 08:30:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.920 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.179 nvme0n1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.179 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.441 nvme0n1 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.441 08:30:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.441 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 nvme0n1 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 nvme0n1 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.705 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
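
Each digest/dhgroup/keyid pass above reduces to two bdev_nvme RPCs: restrict the allowed DH-HMAC-CHAP parameters, then attach with the matching key (plus ckey<N> when one exists), check that the controller came up, and detach. A sketch of one such pass written as direct rpc.py calls, with the transport values copied from the trace; this is illustrative, not the test script's literal code:

    # One connect_authenticate pass, as plain rpc.py calls (values taken
    # from the trace above; key2/ckey2 are just one of the tested slots).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers           # expect a single nvme0 entry
    scripts/rpc.py bdev_nvme_detach_controller nvme0
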
00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.964 08:30:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.964 nvme0n1 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.964 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.222 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
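The host side of each iteration is fully visible in the rpc_cmd traces. As a hedged sketch (rpc_cmd is the autotest wrapper that forwards to SPDK's JSON-RPC service; the address 10.0.0.1:4420, the NQNs, and the key names are copied from the log, the surrounding shell is illustrative), one connect/verify/detach cycle amounts to:

# One connect_authenticate round as traced above (ffdhe3072, keyid 3 shown).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key3 --dhchap-ctrlr-key ckey3
# The controller only shows up if DH-HMAC-CHAP succeeded, so the test asserts on its name.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0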
00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.790 08:30:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.049 nvme0n1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.049 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.308 nvme0n1 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.308 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.595 nvme0n1 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.595 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.853 nvme0n1 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:29.853 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.854 08:30:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.854 08:30:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.112 nvme0n1 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.112 08:30:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.012 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.013 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 nvme0n1 00:17:32.270 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.270 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.270 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.270 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.270 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.529 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.787 nvme0n1 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.788 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.047 
08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.047 08:30:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 nvme0n1 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.306 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.885 nvme0n1 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.885 08:30:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.885 08:30:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.886 08:30:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:33.886 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.886 08:30:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.144 nvme0n1 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.144 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.403 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.404 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.971 nvme0n1 00:17:34.971 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.971 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.971 08:30:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.971 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.971 08:30:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.971 08:30:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.971 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.537 nvme0n1 00:17:35.537 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.537 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.537 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.537 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.537 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.537 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.796 08:30:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.362 nvme0n1 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.362 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.362 
08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
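The stretch of trace above is one complete pass of the per-key check: host/auth.sh installs key index 3 on the target with nvmet_auth_set_key sha256 ffdhe8192 3, then connect_authenticate pins the host to that digest and DH group, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirms the controller came up, and detaches it again. Condensed into a sketch for readability (rpc_cmd and nvmet_auth_set_key are helpers defined elsewhere in the test suite; the address, NQNs, and key names are simply the ones that appear in this trace):

  # One connect/verify/detach pass, as exercised by connect_authenticate().
  digest=sha256 dhgroup=ffdhe8192 keyid=3
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"            # load key/ckey for this index on the target
  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup" # pin the host to one digest/DH group
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid" # authenticate during connect
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] # controller exists, so auth succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0                   # tear down before the next key index
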
00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.363 08:30:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.930 nvme0n1 00:17:36.930 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.930 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.930 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.930 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.930 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.930 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.188 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.189 
08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.189 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.787 nvme0n1 00:17:37.787 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.787 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.787 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.787 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.787 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.788 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.046 nvme0n1 00:17:38.047 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.047 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.047 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.047 08:30:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.047 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.047 08:30:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.047 nvme0n1 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.047 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.306 nvme0n1 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.306 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 nvme0n1 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.564 nvme0n1 00:17:38.564 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.565 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.565 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.565 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.565 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 nvme0n1 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.823 08:30:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
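At this point the sha384/ffdhe2048 pass has finished and the same five key indexes are being replayed against ffdhe3072; the "for digest", "for dhgroup" and "for keyid" lines traced at host/auth.sh@100-102 drive all of these combinations. As a sketch of that driver (only the digests, DH groups and key indexes actually visible in this part of the log are listed; the full arrays are presumably populated earlier in auth.sh):

  # Nested driver loop, paraphrasing the traced host/auth.sh@100-103.
  digests=(sha256 sha384)                   # the sha384 iterations are the ones traced here
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)  # every group is exercised with every key
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do    # keys[0..4]; index 4 has no controller key (ckey4 is empty)
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
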
00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.081 nvme0n1 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.081 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:39.082 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:39.082 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:39.082 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:39.082 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.082 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.341 nvme0n1 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.341 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.599 nvme0n1 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.599 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 nvme0n1 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 08:30:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.858 08:30:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.117 nvme0n1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.117 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 nvme0n1 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.376 08:30:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.376 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.635 nvme0n1 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:40.635 08:30:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.635 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.895 nvme0n1 00:17:40.895 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.895 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.895 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.895 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.895 08:30:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.895 08:30:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:40.895 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.226 nvme0n1 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.226 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.504 nvme0n1 00:17:41.504 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.504 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.504 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.504 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.504 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.762 08:30:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.021 nvme0n1 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.021 08:30:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.021 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.281 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.542 nvme0n1 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.542 08:30:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.543 08:30:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:42.543 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.543 08:30:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.116 nvme0n1 00:17:43.116 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.117 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.379 nvme0n1 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.379 08:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.338 nvme0n1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.338 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.910 nvme0n1 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.910 08:30:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.478 nvme0n1 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.478 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.736 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.737 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.737 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.737 08:30:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.737 08:30:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.737 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.737 08:30:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.305 nvme0n1 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.305 08:30:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.305 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.872 nvme0n1 00:17:46.872 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.872 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.872 08:30:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.872 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.872 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.872 08:30:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.872 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.130 nvme0n1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.130 08:30:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.130 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 nvme0n1 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 nvme0n1 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.648 08:30:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.648 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.648 08:30:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.649 nvme0n1 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.649 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.908 nvme0n1 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.908 08:30:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 nvme0n1 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.166 
08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.166 08:30:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.166 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.167 nvme0n1 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.167 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
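The secrets being cycled through are DH-HMAC-CHAP secrets in the DHHC-1:<hh>:<base64>: representation, where <hh> names the hash used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. A small sketch that splits the keyid-2 secret from this iteration into those fields; it only reports lengths and does not verify the CRC:

  # Sketch: dissect one DHHC-1 secret taken from the trace above.
  key='DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb:'

  IFS=':' read -r tag hash_id b64 rest <<< "$key"
  payload_len=$(printf '%s' "$b64" | base64 -d | wc -c)

  echo "format:       $tag"                  # DHHC-1
  echo "hash id:      $hash_id"              # 01 -> SHA-256
  echo "secret bytes: $((payload_len - 4))"  # 32 here, i.e. payload minus CRC-32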
00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.424 nvme0n1 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.424 08:30:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.424 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.425 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.425 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.425 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.425 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.682 nvme0n1 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.682 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.683 
08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.683 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.957 nvme0n1 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.957 08:30:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.957 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.958 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.214 nvme0n1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.214 08:30:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.214 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.471 nvme0n1 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
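Each iteration in this trace then runs the same host-side RPC cycle: restrict the allowed digest and DH group, attach with the matching key pair, confirm the controller came up, and detach before the next key. A condensed sketch of that cycle is shown below, using the ffdhe4096/key2 values from the surrounding entries; rpc_cmd is assumed to be the autotest wrapper around the SPDK RPC client.

# Host-side DH-HMAC-CHAP cycle, condensed from the traced commands.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# The controller is only listed if authentication succeeded.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0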
00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:49.471 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.472 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.730 nvme0n1 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.730 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.731 08:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.989 nvme0n1 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.989 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.990 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.990 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.990 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.248 nvme0n1 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.248 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.506 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.764 nvme0n1 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:50.764 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
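The repeated nvmf/common.sh@741-755 block in this trace is the get_main_ns_ip helper choosing which address to dial for the current transport. A rough reconstruction is sketched below; the transport variable name and the indirect-expansion step are assumptions, since the trace only shows the literal values tcp, NVMF_INITIATOR_IP and 10.0.0.1.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if the transport is unset or has no mapped address variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the address variable, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                              # assumed indirect expansion, resolving to e.g. 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}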
00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.765 08:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.331 nvme0n1 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.331 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.632 nvme0n1 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.632 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.633 08:30:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.199 nvme0n1 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.199 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.457 nvme0n1 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.457 08:30:44 
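Note on the target side of this loop: each nvmet_auth_set_key call above (host/auth.sh@42-51) provisions the kernel nvmet target with the digest, FFDHE group and DHHC-1 secret for the allowed host, echoing 'hmac(sha512)', the group name and the key in turn. The configfs paths those echoes are redirected into are not visible in this log; the sketch below is a hedged reconstruction that assumes the standard nvmet per-host attribute names, with a placeholder key string.

  # assumed reconstruction of nvmet_auth_set_key (attribute paths not shown in this log)
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'              > "$HOST/dhchap_hash"      # digest chosen by the test loop
  echo ffdhe6144                   > "$HOST/dhchap_dhgroup"   # DH group chosen by the test loop
  echo 'DHHC-1:03:<base64 secret>' > "$HOST/dhchap_key"       # host key; keyid 4 has no controller key
  # when a ckey is present (bidirectional auth), it would be written to "$HOST/dhchap_ctrl_key"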
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTIyNmY4MDQzMWRjOWNhMmUyNmY5NThlMTI4Y2EzZTL0cccQ: 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: ]] 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTU4Zjc1YWQ4OTNhNzc5MDIzMWNhZjJmMWNiM2NkM2RlNjZiZjMzOTJmODVkMjNiNTgwYzg4YTc3YmY2MWYwYuYzm4I=: 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.457 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.716 08:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.283 nvme0n1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.283 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.849 nvme0n1 00:17:53.849 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.849 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.849 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.850 08:30:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.850 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.850 08:30:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.850 08:30:46 
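On the initiator side the same permutation is driven through SPDK's JSON-RPC interface; rpc_cmd is the test wrapper around the rpc client. A standalone equivalent of the two calls seen above, assuming the usual scripts/rpc.py entry point and the keyring names key1/ckey1 that the test registered earlier in the run (outside this excerpt), looks roughly like:

  # restrict the initiator to the digest/DH group under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # attach to the kernel target over the veth address and authenticate with key1/ckey1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # on success the namespace (nvme0n1) appears and the controller is reported back
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0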
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.850 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDEyOGEzNzlkYjA1ODM0NTY5Mjc0YzNlOWRlNjZiYTNMXQeb: 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: ]] 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM5ZmE4YTc0Mzk3NzdhNDhlYTgzMGMwNmVkNDI2YTnROFMU: 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.117 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.699 nvme0n1 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM2MzBlOTI0MGE4MWRkNWIxNDE3NDU1NzViNWFlNDEzMTFjYTM3YTQ3YTI3ZjI3YBdl8A==: 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjFlYTdiMmFhZGY0YThlYTI2ODU4YjMzYjAxZjg2MmLCuZ92: 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:54.699 08:30:46 
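The get_main_ns_ip helper that runs before every attach (nvmf/common.sh@741-755) only decides which environment variable carries the initiator-facing address for the transport in use; for tcp that is NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this veth topology. A condensed sketch of the visible logic follows; the transport variable name is an assumption, since the log only shows its expanded value (tcp).

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to use, e.g. NVMF_INITIATOR_IP
      ip=${!ip}                              # indirect expansion, 10.0.0.1 here
      [[ -n $ip ]] && echo "$ip"
  }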
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.699 08:30:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.265 nvme0n1 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDA4NjI1MmEzNzZjYWYzNjA3Mzg2NGQ4NGEyOGJiZTc2ODlmNDcwYTBhN2MwMDFhYWRhMzBlNzgzYjIzODVjYlpmpJs=: 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.265 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:55.525 08:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.091 nvme0n1 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.091 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU4MjEwNjI4NjIyZGZlM2Y1ZTE0OTRjMTAxMzc2NzkxNTlmYzA5YTdjOTA4Y2VjlefXJQ==: 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjYwMjYwMWFjNTRmYjkzMzg3YjNlZWU4NDFkNmZhOGQ0ZjUxNTY4MDNlNWQ1MTA24MzLdw==: 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.092 
08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.092 request: 00:17:56.092 { 00:17:56.092 "name": "nvme0", 00:17:56.092 "trtype": "tcp", 00:17:56.092 "traddr": "10.0.0.1", 00:17:56.092 "adrfam": "ipv4", 00:17:56.092 "trsvcid": "4420", 00:17:56.092 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:56.092 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:56.092 "prchk_reftag": false, 00:17:56.092 "prchk_guard": false, 00:17:56.092 "hdgst": false, 00:17:56.092 "ddgst": false, 00:17:56.092 "method": "bdev_nvme_attach_controller", 00:17:56.092 "req_id": 1 00:17:56.092 } 00:17:56.092 Got JSON-RPC error response 00:17:56.092 response: 00:17:56.092 { 00:17:56.092 "code": -5, 00:17:56.092 "message": "Input/output error" 00:17:56.092 } 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.092 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.351 request: 00:17:56.351 { 00:17:56.351 "name": "nvme0", 00:17:56.351 "trtype": "tcp", 00:17:56.351 "traddr": "10.0.0.1", 00:17:56.351 "adrfam": "ipv4", 00:17:56.351 "trsvcid": "4420", 00:17:56.351 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:56.351 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:56.351 "prchk_reftag": false, 00:17:56.351 "prchk_guard": false, 00:17:56.351 "hdgst": false, 00:17:56.351 "ddgst": false, 00:17:56.351 "dhchap_key": "key2", 00:17:56.351 "method": "bdev_nvme_attach_controller", 00:17:56.351 "req_id": 1 00:17:56.351 } 00:17:56.351 Got JSON-RPC error response 00:17:56.351 response: 00:17:56.351 { 00:17:56.351 "code": -5, 00:17:56.351 "message": "Input/output error" 00:17:56.351 } 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:56.351 08:30:48 
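The block above is the negative path: with bdev_nvme_set_options now restricted to sha256/ffdhe2048 and key 1 re-provisioned on the target, attach attempts with no key, or with only the wrong key (key2), must fail. The NOT wrapper asserts that rpc_cmd exits non-zero, and the failure surfaces as the JSON-RPC "Input/output error" responses printed in the log, after which bdev_nvme_get_controllers must report zero controllers. A simplified stand-in for that assertion pattern, not the test's actual NOT helper:

  # expect authentication to fail: no DH-HMAC-CHAP key supplied
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: attach without a key succeeded" >&2
      exit 1
  fi
  # the RPC returns {"code": -5, "message": "Input/output error"} and no controller is left behind
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]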
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.351 request: 00:17:56.351 { 00:17:56.351 "name": "nvme0", 00:17:56.351 "trtype": "tcp", 00:17:56.351 "traddr": "10.0.0.1", 00:17:56.351 "adrfam": "ipv4", 
00:17:56.351 "trsvcid": "4420", 00:17:56.351 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:56.351 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:56.351 "prchk_reftag": false, 00:17:56.351 "prchk_guard": false, 00:17:56.351 "hdgst": false, 00:17:56.351 "ddgst": false, 00:17:56.351 "dhchap_key": "key1", 00:17:56.351 "dhchap_ctrlr_key": "ckey2", 00:17:56.351 "method": "bdev_nvme_attach_controller", 00:17:56.351 "req_id": 1 00:17:56.351 } 00:17:56.351 Got JSON-RPC error response 00:17:56.351 response: 00:17:56.351 { 00:17:56.351 "code": -5, 00:17:56.351 "message": "Input/output error" 00:17:56.351 } 00:17:56.351 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.352 rmmod nvme_tcp 00:17:56.352 rmmod nvme_fabrics 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78731 ']' 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78731 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78731 ']' 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78731 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78731 00:17:56.352 killing process with pid 78731 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78731' 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78731 00:17:56.352 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78731 00:17:56.610 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.610 
08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.610 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.610 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.610 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.610 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:56.611 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:56.870 08:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:57.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:57.465 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:57.725 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:57.725 08:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pdX /tmp/spdk.key-null.Vbm /tmp/spdk.key-sha256.hQ2 /tmp/spdk.key-sha384.zIM /tmp/spdk.key-sha512.HgU /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:57.725 08:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:57.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:57.986 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.986 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.986 00:17:57.986 real 0m36.794s 00:17:57.986 user 0m33.062s 00:17:57.986 sys 0m3.835s 00:17:57.986 08:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.986 ************************************ 00:17:57.986 END TEST nvmf_auth_host 00:17:57.986 08:30:50 nvmf_tcp.nvmf_auth_host 
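Cleanup runs in roughly the reverse order of setup: unload the host-side nvme-tcp/nvme-fabrics modules, kill the nvmf target process (pid 78731 here), unlink the allowed host and dismantle the nvmet configfs tree, then unload nvmet. The commands are scattered through the log above; gathered into one hedged sketch (the destination of the bare 'echo 0' is not shown and is assumed to disable the exported namespace):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  nvmet=/sys/kernel/config/nvmet
  rm    "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"   # assumed target of 'echo 0'
  rm -f  "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
  rmdir  "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
  rmdir  "$nvmet/ports/1"
  rmdir  "$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0"
  modprobe -r nvmet_tcp nvmet
  rm -f /tmp/spdk.key-*   # the log removes the specific generated /tmp/spdk.key-* files by name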
-- common/autotest_common.sh@10 -- # set +x 00:17:57.986 ************************************ 00:17:58.246 08:30:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:58.246 08:30:50 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:58.246 08:30:50 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:58.246 08:30:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.246 08:30:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.246 08:30:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.246 ************************************ 00:17:58.246 START TEST nvmf_digest 00:17:58.246 ************************************ 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:58.246 * Looking for test storage... 00:17:58.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:58.246 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:58.247 Cannot find device "nvmf_tgt_br" 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.247 Cannot find device "nvmf_tgt_br2" 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:58.247 Cannot find device "nvmf_tgt_br" 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:58.247 08:30:50 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:58.247 Cannot find device "nvmf_tgt_br2" 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.247 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.506 08:30:50 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:58.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:58.506 00:17:58.506 --- 10.0.0.2 ping statistics --- 00:17:58.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.506 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:58.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:58.506 00:17:58.506 --- 10.0.0.3 ping statistics --- 00:17:58.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.506 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:58.506 00:17:58.506 --- 10.0.0.1 ping statistics --- 00:17:58.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.506 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:58.506 ************************************ 00:17:58.506 START TEST nvmf_digest_clean 00:17:58.506 ************************************ 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:58.506 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:58.507 08:30:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80310 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80310 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80310 ']' 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.507 08:30:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 [2024-07-15 08:30:50.694397] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:58.765 [2024-07-15 08:30:50.694527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.765 [2024-07-15 08:30:50.838993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.023 [2024-07-15 08:30:50.965317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.023 [2024-07-15 08:30:50.965380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.023 [2024-07-15 08:30:50.965394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.023 [2024-07-15 08:30:50.965405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.023 [2024-07-15 08:30:50.965414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
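The nvmf_veth_init sequence above reduces to the following standalone sketch. Interface names and addresses are taken from the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is built the same way and left out for brevity, and error handling is omitted.
# Sketch of the test topology: a veth pair from the initiator side into a
# network namespace that will host the nvmf target, joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
Running the target inside its own namespace lets the initiator-side tools on 10.0.0.1 reach it over a real TCP path while leaving the host's networking untouched; the target itself is then launched in that namespace with --wait-for-rpc, as logged above.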
00:17:59.023 [2024-07-15 08:30:50.965442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.590 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.848 [2024-07-15 08:30:51.801065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:59.848 null0 00:17:59.848 [2024-07-15 08:30:51.854451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.848 [2024-07-15 08:30:51.878606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80342 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80342 /var/tmp/bperf.sock 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80342 ']' 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
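The run_bperf helper above starts a dedicated bdevperf instance per workload and waits for its RPC socket before configuring it. A minimal sketch of that launch, assuming the repo layout shown in the log; the readiness poll via rpc_get_methods is an assumption standing in for the framework's waitforlisten helper, which is not reproduced here.
# Sketch: launch bdevperf for one workload (randread, 4096 B, qd 128) on its
# own RPC socket and wait until that socket answers before configuring it.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
bperfpid=$!
# Poll the RPC socket; rpc_get_methods is used here only as a readiness probe.
until "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
As used above, -z leaves bdevperf idle until perform_tests is sent over RPC, and --wait-for-rpc holds off framework initialization until the explicit framework_start_init call that follows in the log.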
00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.848 08:30:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.848 [2024-07-15 08:30:51.941369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:59.848 [2024-07-15 08:30:51.941477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80342 ] 00:18:00.107 [2024-07-15 08:30:52.084441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.107 [2024-07-15 08:30:52.199333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.040 08:30:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.040 08:30:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:01.040 08:30:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:01.040 08:30:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:01.040 08:30:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:01.298 [2024-07-15 08:30:53.244882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:01.298 08:30:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:01.298 08:30:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:01.556 nvme0n1 00:18:01.556 08:30:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:01.556 08:30:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:01.814 Running I/O for 2 seconds... 
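Spelled out, the bperf_rpc and bperf_py calls above amount to the following sequence against the bdevperf socket; paths and arguments are copied from the log, only the wrapper functions are expanded.
# Sketch: finish bdevperf startup, attach the target with TCP data digest
# enabled, then run the timed workload. Results appear as the Latency table
# that follows in the log.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
The --ddgst flag enables the NVMe/TCP data digest on this connection, which is the feature the digest-clean test is exercising.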
00:18:03.718 00:18:03.718 Latency(us) 00:18:03.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.718 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:03.718 nvme0n1 : 2.01 13848.99 54.10 0.00 0.00 9235.76 7864.32 21567.30 00:18:03.718 =================================================================================================================== 00:18:03.718 Total : 13848.99 54.10 0.00 0.00 9235.76 7864.32 21567.30 00:18:03.718 0 00:18:03.718 08:30:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:03.718 08:30:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:03.718 08:30:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:03.718 08:30:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:03.718 | select(.opcode=="crc32c") 00:18:03.718 | "\(.module_name) \(.executed)"' 00:18:03.718 08:30:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80342 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80342 ']' 00:18:03.976 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80342 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80342 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:03.977 killing process with pid 80342 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80342' 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80342 00:18:03.977 Received shutdown signal, test time was about 2.000000 seconds 00:18:03.977 00:18:03.977 Latency(us) 00:18:03.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.977 =================================================================================================================== 00:18:03.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.977 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80342 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80402 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80402 /var/tmp/bperf.sock 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80402 ']' 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:04.236 08:30:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:04.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:04.495 Zero copy mechanism will not be used. 00:18:04.495 [2024-07-15 08:30:56.413523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
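The verification step that closes each run (first seen just above, after the randread results) can be reconstructed roughly as follows. The jq filter is the one printed in the log; the surrounding shell is a paraphrase of host/digest.sh rather than its literal text.
# Sketch: after a run, confirm crc32c digests were actually computed and
# that the expected accel module (software, since DSA is off) executed them.
SPDK=/home/vagrant/spdk_repo/spdk
read -r acc_module acc_executed < <(
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
exp_module=software
(( acc_executed > 0 )) || exit 1
[[ $acc_module == "$exp_module" ]] || exit 1
Only after this check passes is the bdevperf instance killed and the next block-size/queue-depth combination started, as the killprocess/run_bperf pairs in the log show.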
00:18:04.495 [2024-07-15 08:30:56.413639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80402 ] 00:18:04.495 [2024-07-15 08:30:56.555891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.753 [2024-07-15 08:30:56.683388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.321 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.321 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:05.321 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:05.321 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:05.321 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:05.579 [2024-07-15 08:30:57.701481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:05.838 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:05.838 08:30:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:06.097 nvme0n1 00:18:06.097 08:30:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:06.097 08:30:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:06.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:06.097 Zero copy mechanism will not be used. 00:18:06.097 Running I/O for 2 seconds... 
00:18:08.629 00:18:08.629 Latency(us) 00:18:08.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.629 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:08.629 nvme0n1 : 2.00 7418.51 927.31 0.00 0.00 2153.28 1876.71 3336.38 00:18:08.629 =================================================================================================================== 00:18:08.629 Total : 7418.51 927.31 0.00 0.00 2153.28 1876.71 3336.38 00:18:08.629 0 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:08.629 | select(.opcode=="crc32c") 00:18:08.629 | "\(.module_name) \(.executed)"' 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80402 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80402 ']' 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80402 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80402 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:08.629 killing process with pid 80402 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80402' 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80402 00:18:08.629 Received shutdown signal, test time was about 2.000000 seconds 00:18:08.629 00:18:08.629 Latency(us) 00:18:08.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.629 =================================================================================================================== 00:18:08.629 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.629 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80402 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80468 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80468 /var/tmp/bperf.sock 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80468 ']' 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.889 08:31:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:08.889 [2024-07-15 08:31:00.861820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:08.889 [2024-07-15 08:31:00.861897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80468 ] 00:18:08.889 [2024-07-15 08:31:00.992761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.148 [2024-07-15 08:31:01.110136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.084 08:31:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.084 08:31:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:10.084 08:31:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:10.084 08:31:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:10.084 08:31:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:10.344 [2024-07-15 08:31:02.266684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:10.344 08:31:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.344 08:31:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.603 nvme0n1 00:18:10.603 08:31:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:10.603 08:31:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:10.603 Running I/O for 2 seconds... 
00:18:13.132 00:18:13.132 Latency(us) 00:18:13.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.132 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.132 nvme0n1 : 2.00 15352.69 59.97 0.00 0.00 8329.97 4230.05 16205.27 00:18:13.132 =================================================================================================================== 00:18:13.132 Total : 15352.69 59.97 0.00 0.00 8329.97 4230.05 16205.27 00:18:13.132 0 00:18:13.132 08:31:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:13.132 08:31:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:13.132 08:31:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:13.132 08:31:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:13.132 | select(.opcode=="crc32c") 00:18:13.132 | "\(.module_name) \(.executed)"' 00:18:13.132 08:31:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80468 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80468 ']' 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80468 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80468 00:18:13.132 killing process with pid 80468 00:18:13.132 Received shutdown signal, test time was about 2.000000 seconds 00:18:13.132 00:18:13.132 Latency(us) 00:18:13.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.132 =================================================================================================================== 00:18:13.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80468' 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80468 00:18:13.132 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80468 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80523 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80523 /var/tmp/bperf.sock 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80523 ']' 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:13.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.389 08:31:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:13.389 [2024-07-15 08:31:05.353645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:13.389 [2024-07-15 08:31:05.353750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80523 ] 00:18:13.389 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:13.389 Zero copy mechanism will not be used. 
00:18:13.389 [2024-07-15 08:31:05.487578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.646 [2024-07-15 08:31:05.606872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.214 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.214 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:14.214 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:14.214 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:14.214 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:14.780 [2024-07-15 08:31:06.698102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:14.780 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:14.780 08:31:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.038 nvme0n1 00:18:15.038 08:31:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:15.038 08:31:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:15.296 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:15.296 Zero copy mechanism will not be used. 00:18:15.296 Running I/O for 2 seconds... 
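At this point all four digest-clean workloads have been launched: randread and randwrite, each at 4096 bytes/qd 128 and 131072 bytes/qd 16, always for 2 seconds and without DSA. The script issues them one by one (host/digest.sh@128 through @131 above); the same sweep could be written compactly as the following sketch.
# Sketch of the digest-clean workload matrix exercised in this test.
# run_bperf is the host/digest.sh helper seen above: rw, block size, queue
# depth, and a flag for DSA-offloaded digests (false throughout this run).
for spec in "randread 4096 128" "randread 131072 16" \
            "randwrite 4096 128" "randwrite 131072 16"; do
    read -r rw bs qd <<< "$spec"
    run_bperf "$rw" "$bs" "$qd" false
done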
00:18:17.260 00:18:17.260 Latency(us) 00:18:17.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.260 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:17.260 nvme0n1 : 2.00 4721.72 590.21 0.00 0.00 3381.57 2591.65 8043.05 00:18:17.260 =================================================================================================================== 00:18:17.260 Total : 4721.72 590.21 0.00 0.00 3381.57 2591.65 8043.05 00:18:17.260 0 00:18:17.260 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:17.260 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:17.260 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:17.260 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:17.260 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:17.260 | select(.opcode=="crc32c") 00:18:17.260 | "\(.module_name) \(.executed)"' 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80523 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80523 ']' 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80523 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80523 00:18:17.518 killing process with pid 80523 00:18:17.518 Received shutdown signal, test time was about 2.000000 seconds 00:18:17.518 00:18:17.518 Latency(us) 00:18:17.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.518 =================================================================================================================== 00:18:17.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80523' 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80523 00:18:17.518 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80523 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80310 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80310 ']' 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80310 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80310 00:18:18.085 killing process with pid 80310 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80310' 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80310 00:18:18.085 08:31:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80310 00:18:18.085 ************************************ 00:18:18.085 END TEST nvmf_digest_clean 00:18:18.085 ************************************ 00:18:18.085 00:18:18.085 real 0m19.606s 00:18:18.085 user 0m37.841s 00:18:18.085 sys 0m5.325s 00:18:18.085 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:18.085 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:18.344 ************************************ 00:18:18.344 START TEST nvmf_digest_error 00:18:18.344 ************************************ 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80616 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80616 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80616 ']' 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.344 08:31:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.344 [2024-07-15 08:31:10.366486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.344 [2024-07-15 08:31:10.366580] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.344 [2024-07-15 08:31:10.507092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.625 [2024-07-15 08:31:10.651445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.625 [2024-07-15 08:31:10.651507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.625 [2024-07-15 08:31:10.651530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.625 [2024-07-15 08:31:10.651541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.625 [2024-07-15 08:31:10.651550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.625 [2024-07-15 08:31:10.651583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.191 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.191 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:19.191 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.191 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.191 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.451 [2024-07-15 08:31:11.400310] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.451 08:31:11 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.451 [2024-07-15 08:31:11.467981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:19.451 null0 00:18:19.451 [2024-07-15 08:31:11.526731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.451 [2024-07-15 08:31:11.550836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80648 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80648 /var/tmp/bperf.sock 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80648 ']' 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.451 08:31:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.451 [2024-07-15 08:31:11.613374] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
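For the error-path test the target is prepared differently from the clean test: before the listener configuration above, host/digest.sh@104 routed the target's crc32c handling through the error-injection accel module so that later RPCs can corrupt digests on demand. A one-line sketch of that step, using the same rpc_cmd wrapper as the log:
# Sketch: on the nvmf target, assign the crc32c opcode to the 'error' accel
# module so digest results can be corrupted on demand later in the test.
rpc_cmd accel_assign_opc -o crc32c -m error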
00:18:19.451 [2024-07-15 08:31:11.613474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80648 ] 00:18:19.710 [2024-07-15 08:31:11.755339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.969 [2024-07-15 08:31:11.920270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.969 [2024-07-15 08:31:12.001040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.538 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.538 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:20.538 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:20.538 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:20.802 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:20.802 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.802 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.802 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.802 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:20.802 08:31:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.368 nvme0n1 00:18:21.368 08:31:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:21.368 08:31:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.368 08:31:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.368 08:31:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.368 08:31:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:21.368 08:31:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:21.368 Running I/O for 2 seconds... 
00:18:21.368 [2024-07-15 08:31:13.444619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.368 [2024-07-15 08:31:13.444702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.368 [2024-07-15 08:31:13.444728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.368 [2024-07-15 08:31:13.462484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.368 [2024-07-15 08:31:13.462553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.368 [2024-07-15 08:31:13.462569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.368 [2024-07-15 08:31:13.480653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.368 [2024-07-15 08:31:13.480714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.368 [2024-07-15 08:31:13.480741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.368 [2024-07-15 08:31:13.498744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.368 [2024-07-15 08:31:13.498807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.368 [2024-07-15 08:31:13.498822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.368 [2024-07-15 08:31:13.516976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.368 [2024-07-15 08:31:13.517065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.368 [2024-07-15 08:31:13.517082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.368 [2024-07-15 08:31:13.534885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.368 [2024-07-15 08:31:13.534965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.368 [2024-07-15 08:31:13.534982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.626 [2024-07-15 08:31:13.552600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.626 [2024-07-15 08:31:13.552672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.626 [2024-07-15 08:31:13.552687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.626 [2024-07-15 08:31:13.570044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.626 [2024-07-15 08:31:13.570128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.626 [2024-07-15 08:31:13.570153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.626 [2024-07-15 08:31:13.587383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.626 [2024-07-15 08:31:13.587455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.626 [2024-07-15 08:31:13.587471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.626 [2024-07-15 08:31:13.605316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.626 [2024-07-15 08:31:13.605398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.626 [2024-07-15 08:31:13.605413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.626 [2024-07-15 08:31:13.623054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.626 [2024-07-15 08:31:13.623111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.626 [2024-07-15 08:31:13.623127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.640656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.640712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.640740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.658256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.658329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.658345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.675873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.675952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.675968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.693208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.693285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.693299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.710982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.711041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.711055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.729193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.729280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.729295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.747438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.747510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.747525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.765839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.765916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.765932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.627 [2024-07-15 08:31:13.784492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.627 [2024-07-15 08:31:13.784567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.627 [2024-07-15 08:31:13.784583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.884 [2024-07-15 08:31:13.802100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.884 [2024-07-15 08:31:13.802191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.884 [2024-07-15 08:31:13.802206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.884 [2024-07-15 08:31:13.819957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.884 [2024-07-15 08:31:13.820018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.884 [2024-07-15 08:31:13.820033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.884 [2024-07-15 08:31:13.837608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.884 [2024-07-15 08:31:13.837683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.884 [2024-07-15 08:31:13.837698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.884 [2024-07-15 08:31:13.855003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.884 [2024-07-15 08:31:13.855062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.884 [2024-07-15 08:31:13.855077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.884 [2024-07-15 08:31:13.872813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.884 [2024-07-15 08:31:13.872877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.884 [2024-07-15 08:31:13.872892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.884 [2024-07-15 08:31:13.890345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.890426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:13.890448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:13.907932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.907996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:13.908011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:13.926286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.926391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:21.885 [2024-07-15 08:31:13.926406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:13.945236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.945357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:13.945374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:13.963471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.963540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:13.963555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:13.981425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.981477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:13.981492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:13.999281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:13.999335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:13.999350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:14.017057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:14.017117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:14.017132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:14.034745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:14.034825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:14.034840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.885 [2024-07-15 08:31:14.052155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:21.885 [2024-07-15 08:31:14.052242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:18457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.885 [2024-07-15 08:31:14.052257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.143 [2024-07-15 08:31:14.069839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.143 [2024-07-15 08:31:14.069907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.143 [2024-07-15 08:31:14.069922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.143 [2024-07-15 08:31:14.087180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.143 [2024-07-15 08:31:14.087240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.143 [2024-07-15 08:31:14.087254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.104558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.104625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.104640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.121913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.121979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.121994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.139377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.139443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.139458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.156772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.156839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.156853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.174247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.174321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.174337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.192058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.192131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.192147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.209606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.209691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.209707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.226977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.227034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.227048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.244381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.244448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.244463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.261831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.261901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.261915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.279199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.279263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.279287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.296466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 
00:18:22.144 [2024-07-15 08:31:14.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.296543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.144 [2024-07-15 08:31:14.313690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.144 [2024-07-15 08:31:14.313765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.144 [2024-07-15 08:31:14.313781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.331070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.331139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.331153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.348478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.348553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.348568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.365899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.365977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.365994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.383782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.383865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.383881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.401538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.401620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.401636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.418832] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.418898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.418913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.436265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.436337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.436352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.453661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.453741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.453757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.470983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.471047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.471062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.488318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.488399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.506155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.506240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.506255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.523870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.523955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.523971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.541627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.541713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.541742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.403 [2024-07-15 08:31:14.566813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.403 [2024-07-15 08:31:14.566895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.403 [2024-07-15 08:31:14.566912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.584308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.584386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.584402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.601767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.601841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.601857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.619122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.619195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.619211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.636437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.636507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.636522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.653813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.653895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.653911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.671420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.671492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.671508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.688778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.688844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.688859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.706199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.706276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.706293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.723709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.723793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.723808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.663 [2024-07-15 08:31:14.741599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.663 [2024-07-15 08:31:14.741681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.663 [2024-07-15 08:31:14.741697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.664 [2024-07-15 08:31:14.758949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.664 [2024-07-15 08:31:14.758991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.664 [2024-07-15 08:31:14.759006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.664 [2024-07-15 08:31:14.776277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.664 [2024-07-15 08:31:14.776328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.664 [2024-07-15 
08:31:14.776342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.664 [2024-07-15 08:31:14.793924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.664 [2024-07-15 08:31:14.793982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.664 [2024-07-15 08:31:14.794004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.664 [2024-07-15 08:31:14.811850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.664 [2024-07-15 08:31:14.811942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.664 [2024-07-15 08:31:14.811958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.664 [2024-07-15 08:31:14.829401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.664 [2024-07-15 08:31:14.829472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.664 [2024-07-15 08:31:14.829487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.923 [2024-07-15 08:31:14.846813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.923 [2024-07-15 08:31:14.846886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.923 [2024-07-15 08:31:14.846902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.923 [2024-07-15 08:31:14.864161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.923 [2024-07-15 08:31:14.864224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.923 [2024-07-15 08:31:14.864239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.923 [2024-07-15 08:31:14.881556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.923 [2024-07-15 08:31:14.881628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.923 [2024-07-15 08:31:14.881644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.923 [2024-07-15 08:31:14.898916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.923 [2024-07-15 08:31:14.898985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:772 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:22.923 [2024-07-15 08:31:14.899000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.923 [2024-07-15 08:31:14.916233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.923 [2024-07-15 08:31:14.916301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.923 [2024-07-15 08:31:14.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.923 [2024-07-15 08:31:14.933595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.923 [2024-07-15 08:31:14.933664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:14.933680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:14.951303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:14.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:14.951403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:14.968753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:14.968823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:14.968838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:14.986336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:14.986411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:14.986427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:15.004241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:15.004328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:15.004343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:15.022133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:15.022210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:23003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:15.022225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:15.039546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:15.039618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:15.039633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:15.057091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:15.057162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:15.057177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:15.074495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:15.074565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:15.074580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.924 [2024-07-15 08:31:15.091776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:22.924 [2024-07-15 08:31:15.091828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.924 [2024-07-15 08:31:15.091843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.183 [2024-07-15 08:31:15.109113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.183 [2024-07-15 08:31:15.109169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.183 [2024-07-15 08:31:15.109184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.183 [2024-07-15 08:31:15.126849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.183 [2024-07-15 08:31:15.126928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.126942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.144570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.144647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.144663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.161892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.161967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.161983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.179222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.179295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.179311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.196646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.196694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.196708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.214020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.214063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.214078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.231441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.231488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.231503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.249132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.249192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.249207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.266402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 
00:18:23.184 [2024-07-15 08:31:15.266451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.266465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.283711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.283765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.283779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.301113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.301176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.301192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.318433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.318488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.318503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.335712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.335767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.335781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.184 [2024-07-15 08:31:15.353047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.184 [2024-07-15 08:31:15.353090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.184 [2024-07-15 08:31:15.353104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.443 [2024-07-15 08:31:15.370605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.443 [2024-07-15 08:31:15.370673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.443 [2024-07-15 08:31:15.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.443 [2024-07-15 08:31:15.388384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.443 [2024-07-15 08:31:15.388449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.443 [2024-07-15 08:31:15.388465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.443 [2024-07-15 08:31:15.405856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.443 [2024-07-15 08:31:15.405906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.443 [2024-07-15 08:31:15.405921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.443 [2024-07-15 08:31:15.422871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1620020) 00:18:23.443 [2024-07-15 08:31:15.422935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.443 [2024-07-15 08:31:15.422950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.443 00:18:23.443 Latency(us) 00:18:23.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.443 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:23.443 nvme0n1 : 2.01 14363.64 56.11 0.00 0.00 8903.47 8221.79 33840.41 00:18:23.443 =================================================================================================================== 00:18:23.443 Total : 14363.64 56.11 0.00 0.00 8903.47 8221.79 33840.41 00:18:23.443 0 00:18:23.443 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:23.443 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:23.443 | .driver_specific 00:18:23.443 | .nvme_error 00:18:23.443 | .status_code 00:18:23.443 | .command_transient_transport_error' 00:18:23.443 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:23.443 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80648 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80648 ']' 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80648 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80648 00:18:23.703 killing process with pid 80648 00:18:23.703 Received shutdown signal, test time was about 2.000000 seconds 00:18:23.703 00:18:23.703 
Latency(us) 00:18:23.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.703 =================================================================================================================== 00:18:23.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80648' 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80648 00:18:23.703 08:31:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80648 00:18:23.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80711 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80711 /var/tmp/bperf.sock 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80711 ']' 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.963 08:31:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.222 [2024-07-15 08:31:16.181453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:24.222 [2024-07-15 08:31:16.182042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:18:24.222 Zero copy mechanism will not be used. 
00:18:24.222 llocations --file-prefix=spdk_pid80711 ] 00:18:24.222 [2024-07-15 08:31:16.333897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.481 [2024-07-15 08:31:16.483018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.481 [2024-07-15 08:31:16.556981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:25.048 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.048 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:25.048 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:25.048 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:25.305 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:25.305 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.305 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.305 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.305 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.305 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.563 nvme0n1 00:18:25.563 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:25.563 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.563 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.563 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.563 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:25.563 08:31:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:25.821 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:25.821 Zero copy mechanism will not be used. 00:18:25.821 Running I/O for 2 seconds... 
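The xtrace output above is the setup for this digest-error stage: bdevperf is started in wait-for-RPC mode (-z) against /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev-level retries are enabled, the controller is attached over TCP with data digest (--ddgst) turned on, the accel framework on the target side is told to corrupt crc32c results, and perform_tests then drives the workload. A minimal sketch of that sequence, using only the paths, socket, and RPC calls that appear in the trace (the $rpc and $BPERF_SOCK shorthands are illustrative, not names from the original script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core mask 0x2; -z makes it wait for RPC configuration.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

    # Enable per-command NVMe error statistics and unlimited bdev-level retries.
    "$rpc" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digest enabled so every read payload is CRC-checked.
    "$rpc" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # On the target side (default RPC socket), inject corrupted crc32c results
    # (-o crc32c -t corrupt -i 32, exactly as traced above); the initiator then
    # reports these as the data digest errors filling the log below.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the configured randread workload for the 2-second test window.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests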
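Once the 2-second run completes, the pass/fail decision visible near the top of this excerpt is just an iostat query: bdev_get_iostat is fetched over the same bperf socket and jq pulls the command_transient_transport_error counter out of the NVMe error statistics (the trace shows 113 such completions, so the (( 113 > 0 )) check passes and the bdevperf process is killed). A sketch of that check, reusing the illustrative variables from the block above (errcount is likewise an illustrative name):

    # Count completions recorded as TRANSIENT TRANSPORT ERROR; with digest
    # corruption injected, this must be non-zero for the stage to pass.
    errcount=$("$rpc" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    (( errcount > 0 ))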
00:18:25.821 [2024-07-15 08:31:17.868061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.821 [2024-07-15 08:31:17.868149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.821 [2024-07-15 08:31:17.868167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.821 [2024-07-15 08:31:17.873266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.821 [2024-07-15 08:31:17.873308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.821 [2024-07-15 08:31:17.873329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.821 [2024-07-15 08:31:17.878501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.821 [2024-07-15 08:31:17.878549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.821 [2024-07-15 08:31:17.878564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.821 [2024-07-15 08:31:17.883654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.883695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.883710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.888677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.888737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.888759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.893890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.893931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.893945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.899160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.899214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.899229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.904350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.904392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.904413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.909673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.909734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.909751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.915009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.915052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.915066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.920149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.920190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.920205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.925251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.925292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.925307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.930399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.930443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.930458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.935347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.935394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.935411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.940310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.940353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.940368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.945393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.945449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.950431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.950472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.950486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.955602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.955642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.955656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.960642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.960684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.960698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.965642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.965686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.965701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.970614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.970661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:25.822 [2024-07-15 08:31:17.970676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.975631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.975671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.975686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.980661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.980704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.980732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.985872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.985930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.985946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:25.822 [2024-07-15 08:31:17.990990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:25.822 [2024-07-15 08:31:17.991032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.822 [2024-07-15 08:31:17.991047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:17.996296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:17.996348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:17.996363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.001448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.001491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.001506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.006657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.006699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.006714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.011885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.011924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.011954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.017091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.017146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.017161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.022751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.022795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.022810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.027841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.027881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.027895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.032739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.032779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.032793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.037678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.037734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.037750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.042644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.042685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.042699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.047631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.047673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.047688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.052600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.052642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.052656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.057687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.057739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.057754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.063011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.063056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.063071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.068289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.068338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.068353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.073462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.073513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.073529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.078753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 
00:18:26.083 [2024-07-15 08:31:18.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.078820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.083938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.083989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.084005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.089031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.089082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.089098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.094242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.094284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.094298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.099341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.099383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.099408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.104425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.104466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.104481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.109354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.109395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.109409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.114129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.114167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.114182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.083 [2024-07-15 08:31:18.119119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.083 [2024-07-15 08:31:18.119160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.083 [2024-07-15 08:31:18.119175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.124159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.124201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.124215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.129337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.129378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.129392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.134540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.134585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.134600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.139700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.139754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.139774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.144651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.144695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.144710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.149832] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.149875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.149902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.155146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.155191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.155206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.160480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.160526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.160549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.165601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.165645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.165660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.170698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.170754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.170770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.175948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.175989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.176004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.181119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.181170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.181185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:26.084 [2024-07-15 08:31:18.186314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.186365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.186379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.191777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.191835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.191852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.197054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.197105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.197120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.202320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.202374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.202389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.207572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.207611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.207625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.212819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.212861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.212876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.218100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.218147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.218161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.223314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.223356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.223370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.228496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.228552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.228576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.233761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.233828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.233843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.238998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.239070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.239087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.244437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.244494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.244510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.249689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.249766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.249781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.084 [2024-07-15 08:31:18.255059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.084 [2024-07-15 08:31:18.255109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.084 [2024-07-15 08:31:18.255124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.260266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.260322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.265448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.265488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.265502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.270778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.270823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.270837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.276211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.276251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.276265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.281445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.281484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.281498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.286613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.286676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.286690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.291971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.292013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:26.344 [2024-07-15 08:31:18.292028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.297029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.297069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.297084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.302006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.302045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.302059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.307193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.307233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.307247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.312248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.312286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.312300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.317419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.317499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.317516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.322712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.322782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.322796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.327792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.327832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.327846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.332953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.332992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.333006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.338132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.338172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.338186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.343300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.343340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.343354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.348326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.348372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.348386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.353440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.353492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.353507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.358531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.358585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.358600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.363686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.363770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.363786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.368826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.368877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.368893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.374047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.344 [2024-07-15 08:31:18.374087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.344 [2024-07-15 08:31:18.374102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.344 [2024-07-15 08:31:18.379187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.379242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.384773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.384822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.384837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.389915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.389982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.389998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.394964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.395003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.395017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.399929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 
00:18:26.345 [2024-07-15 08:31:18.399969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.399983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.404920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.404961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.404974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.410208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.410249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.410264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.415518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.415561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.415576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.420644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.420684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.420699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.425808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.425846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.425860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.430803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.430841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.430856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.435862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.435901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.435914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.440887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.440925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.445805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.445843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.445856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.450911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.450953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.450967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.455785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.455819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.455833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.460850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.460888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.460902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.466023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.466062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.466075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.471253] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.471307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.471322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.476448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.476487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.476501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.481566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.481615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.481630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.486480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.486519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.486533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.491536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.491577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.491591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.496654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.496695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.496709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.502020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.502061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.502075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:26.345 [2024-07-15 08:31:18.507359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.507401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.507416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.345 [2024-07-15 08:31:18.512428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.345 [2024-07-15 08:31:18.512467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.345 [2024-07-15 08:31:18.512481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.517866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.517924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.517939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.523185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.523225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.523239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.528421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.528460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.528474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.533616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.533656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.533671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.538648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.538687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.538701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.543853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.543890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.543911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.548797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.606 [2024-07-15 08:31:18.548835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.606 [2024-07-15 08:31:18.548849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.606 [2024-07-15 08:31:18.554029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.554070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.554083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.559818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.559863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.559878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.564927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.564968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.564983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.570056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.570106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.570129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.575467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.575510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.575524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.580773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.580834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.580853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.585864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.585926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.585949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.590826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.590868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.590883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.595826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.595869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.595893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.601123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.601167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.601183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.606253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.606297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.606312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.612345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.612411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:26.607 [2024-07-15 08:31:18.612434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.617660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.617740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.617757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.622811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.622869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.622885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.628002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.628062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.628079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.632916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.632968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.632984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.637877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.637939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.637970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.643106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.643160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.643176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.648161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.648206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.648222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.653167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.653208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.653222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.658026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.658065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.658087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.663176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.663221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.663235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.668383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.668424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.668438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.673560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.673600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.673614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.678627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.678666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.678680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.683435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.683474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.683488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.688477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.688517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.688531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.693499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.693539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.693553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.607 [2024-07-15 08:31:18.698409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.607 [2024-07-15 08:31:18.698448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.607 [2024-07-15 08:31:18.698463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.703327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.703368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.703384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.708217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.708255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.708269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.713148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.713187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.713201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.718131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 
00:18:26.608 [2024-07-15 08:31:18.718170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.718184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.723076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.723114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.723128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.728049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.728088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.728103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.732992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.733030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.733044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.737896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.737937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.737952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.742942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.742987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.743002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.748094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.748145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.748160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.753242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.753292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.753307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.758507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.758568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.758591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.763839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.763909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.763924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.768929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.768982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.768998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.608 [2024-07-15 08:31:18.773974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.608 [2024-07-15 08:31:18.774023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.608 [2024-07-15 08:31:18.774038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.779145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.779190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.779205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.784521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.784570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.784585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.789839] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.789881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.789896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.795391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.795445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.795460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.801065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.801120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.801137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.806135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.806207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.806223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.811352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.811399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.811422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.816756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.816807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.816829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.822243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.822286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.822301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:26.868 [2024-07-15 08:31:18.827516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.827561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.827577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.832621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.832664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.832679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.837931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.837973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.837988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.843081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.843124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.843139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.848172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.848211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.848227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.853119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.853158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.853171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.858071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.858103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.858115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.862967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.863007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.863028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.867990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.868030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.868044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.873027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.873078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.873093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.878198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.868 [2024-07-15 08:31:18.878240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.868 [2024-07-15 08:31:18.878254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.868 [2024-07-15 08:31:18.883546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.883592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.883608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.888836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.894864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.894911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.894927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.900113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.900158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.900173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.905411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.905465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.905485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.910801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.910844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.910860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.915924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.915969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.915985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.920901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.920943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.920958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.925805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.925847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.925861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.930980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.931022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:26.869 [2024-07-15 08:31:18.931037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.935823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.935863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.935878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.940673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.940713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.940742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.945478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.945516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.945530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.950279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.950318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.950332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.955087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.955127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.955141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.959880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.959918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.959932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.964694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.964747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.964762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.970005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.970050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.970065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.974971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.975014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.975029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.979980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.980030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.980046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.984908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.984959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.984975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.989826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.989876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.989892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.994781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.994820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.994834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:18.999629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:18.999675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:18.999690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.869 [2024-07-15 08:31:19.004687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.869 [2024-07-15 08:31:19.004746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.869 [2024-07-15 08:31:19.004763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.009603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.870 [2024-07-15 08:31:19.009650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.009664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.015459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.870 [2024-07-15 08:31:19.015500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.015515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.020940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.870 [2024-07-15 08:31:19.020996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.021011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.025898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.870 [2024-07-15 08:31:19.025938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.025953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.031527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.870 [2024-07-15 08:31:19.031573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.031589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.036418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 
00:18:26.870 [2024-07-15 08:31:19.036467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.036483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.870 [2024-07-15 08:31:19.041493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:26.870 [2024-07-15 08:31:19.041544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.870 [2024-07-15 08:31:19.041561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.129 [2024-07-15 08:31:19.046463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.129 [2024-07-15 08:31:19.046523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.129 [2024-07-15 08:31:19.046545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.129 [2024-07-15 08:31:19.051430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.129 [2024-07-15 08:31:19.051479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.129 [2024-07-15 08:31:19.051496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.129 [2024-07-15 08:31:19.056287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.129 [2024-07-15 08:31:19.056334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.129 [2024-07-15 08:31:19.056349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.129 [2024-07-15 08:31:19.061124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.129 [2024-07-15 08:31:19.061164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.129 [2024-07-15 08:31:19.061178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.129 [2024-07-15 08:31:19.065987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.066026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.066041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.070780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.070819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.070833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.075611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.075650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.075664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.080497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.080536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.080550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.085313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.085352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.085367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.090122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.090161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.090176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.095062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.095102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.095116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.100082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.100122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.100137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.105067] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.105106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.105120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.109923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.109963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.109976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.114730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.114768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.114782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.119653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.119693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.119708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.124513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.124552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.124566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.129316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.129354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.129368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.134141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.134180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.134194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:27.130 [2024-07-15 08:31:19.139003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.139056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.143868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.143906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.143920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.148775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.148823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.148838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.153547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.153601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.153616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.158538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.158592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.158607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.163435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.163486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.163502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.168308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.168357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.168373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.173137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.173183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.173198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.178007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.178046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.178061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.182941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.182981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.182995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.187904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.187953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.187968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.192869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.192908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.192923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.197669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.197707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.197737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.202518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.202557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.202591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.207500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.207540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.207554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.212443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.212483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.212498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.217234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.217273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.217287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.222046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.222111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.222126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.226953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.226992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.227006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.231842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.231880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.231893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.236783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.236820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.130 [2024-07-15 08:31:19.236833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.241657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.241697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.241711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.246476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.246521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.246535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.251354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.251392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.251406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.256160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.256198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.256212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.261028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.261067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.261082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.265831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.265869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.265883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.270659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.270701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.270716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.275640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.275681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.275695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.280595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.280650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.130 [2024-07-15 08:31:19.280665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.130 [2024-07-15 08:31:19.285531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.130 [2024-07-15 08:31:19.285571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.131 [2024-07-15 08:31:19.285586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.131 [2024-07-15 08:31:19.290449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.131 [2024-07-15 08:31:19.290490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.131 [2024-07-15 08:31:19.290505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.131 [2024-07-15 08:31:19.295385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.131 [2024-07-15 08:31:19.295424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.131 [2024-07-15 08:31:19.295438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.131 [2024-07-15 08:31:19.300400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.131 [2024-07-15 08:31:19.300440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.131 [2024-07-15 08:31:19.300454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.390 [2024-07-15 08:31:19.305284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.390 [2024-07-15 08:31:19.305323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.390 [2024-07-15 08:31:19.305338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.310221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.310262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.310276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.315106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.315145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.315160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.320026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.320064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.320079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.324947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.324990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.325005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.330050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.330089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.330104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.335584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.335623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.335637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.340445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 
00:18:27.391 [2024-07-15 08:31:19.340484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.340498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.345336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.345375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.345389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.350142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.350181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.350195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.355488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.355534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.355549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.360749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.360789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.360803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.365948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.365993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.366009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.371080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.371124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.371139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.376138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.376183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.376198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.381192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.381236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.381252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.386220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.386265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.386281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.391558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.391619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.391642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.396896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.396941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.396957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.402049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.402095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.402111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.406983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.407026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.407040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.411916] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.411959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.411974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.416937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.416980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.416994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.421939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.421983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.421998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.426763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.426805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.426819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.431701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.431759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.431774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.436566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.436607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.436622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.441630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.441672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.441686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:27.391 [2024-07-15 08:31:19.446757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.446802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.446817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.391 [2024-07-15 08:31:19.451634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.391 [2024-07-15 08:31:19.451674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-07-15 08:31:19.451688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.456467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.456507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.456521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.461313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.461353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.461367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.466169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.466207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.466222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.471025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.471064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.471079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.475889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.475927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.475941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.480741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.480774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.480787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.485620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.485658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.485672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.490429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.490470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.490485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.495371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.495410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.495423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.500181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.500220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.500234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.505044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.505083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.505098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.509933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.509971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.509985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.514805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.514843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.514858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.519610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.519649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.519663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.524506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.524545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.524559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.529373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.529412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.529426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.534792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.534838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.534854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.539950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.539991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.540007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.544916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.544962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.392 [2024-07-15 08:31:19.544978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.549847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.549894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.549909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.554860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.554910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.554926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.392 [2024-07-15 08:31:19.559746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.392 [2024-07-15 08:31:19.559796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.392 [2024-07-15 08:31:19.559812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.564614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.564668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.564684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.569517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.569563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.569579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.574815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.574874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.574890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.581294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.581363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.581393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.586908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.586953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.586969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.592044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.592087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.592108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.596887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.596927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.596942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.601709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.601760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.601774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.606529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.606569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.611466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.611506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.611521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.616597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.616640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.616656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.621634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.621682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.621704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.626552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.626596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.626612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.631473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.631514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.652 [2024-07-15 08:31:19.631529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.652 [2024-07-15 08:31:19.636354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.652 [2024-07-15 08:31:19.636395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.636410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.641319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.641362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.641377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.646224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.646266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.646281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.651080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 
00:18:27.653 [2024-07-15 08:31:19.651118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.651133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.655961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.656002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.656016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.660869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.660908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.660922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.665750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.665789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.665803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.670647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.670686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.670700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.675586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.675637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.680553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.680594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.680607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.685414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.685453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.685467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.690283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.690325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.690339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.695290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.695349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.700348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.700388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.700402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.705328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.705369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.705384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.710251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.710292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.710306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.715168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.715209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.715223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.720087] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.720125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.720139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.724914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.724953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.724966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.729868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.729906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.729919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.734893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.734932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.734946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.739787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.739825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.739839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.744750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.744783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.744796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.749639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.749679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.749693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:27.653 [2024-07-15 08:31:19.754540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.754579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.754592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.759406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.759445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.759459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.764277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.764315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.764328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.769138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.769177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.769190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.774004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.774043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.653 [2024-07-15 08:31:19.774057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.653 [2024-07-15 08:31:19.778867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.653 [2024-07-15 08:31:19.778906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.778920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.783782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.783820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.783834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.788657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.788695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.788709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.793595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.793636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.793650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.798656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.798696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.798710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.803646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.803685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.803699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.808564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.808606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.808624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.813527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.813569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.813584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.818433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.818474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.818489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.654 [2024-07-15 08:31:19.823446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.654 [2024-07-15 08:31:19.823487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.654 [2024-07-15 08:31:19.823501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.912 [2024-07-15 08:31:19.828543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.912 [2024-07-15 08:31:19.828584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.912 [2024-07-15 08:31:19.828599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.912 [2024-07-15 08:31:19.833493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.912 [2024-07-15 08:31:19.833534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.912 [2024-07-15 08:31:19.833549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.912 [2024-07-15 08:31:19.838692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.912 [2024-07-15 08:31:19.838772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.912 [2024-07-15 08:31:19.838789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.912 [2024-07-15 08:31:19.844015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.912 [2024-07-15 08:31:19.844069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.912 [2024-07-15 08:31:19.844085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.912 [2024-07-15 08:31:19.849223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.913 [2024-07-15 08:31:19.849268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.913 [2024-07-15 08:31:19.849283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.913 [2024-07-15 08:31:19.854112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf9ac0) 00:18:27.913 [2024-07-15 08:31:19.854153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.913 [2024-07-15 08:31:19.854168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.913 00:18:27.913 Latency(us) 00:18:27.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.913 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:27.913 nvme0n1 : 2.00 6092.62 761.58 0.00 0.00 2622.58 2308.65 11260.28 00:18:27.913 =================================================================================================================== 00:18:27.913 Total : 6092.62 761.58 0.00 0.00 2622.58 2308.65 11260.28 00:18:27.913 0 00:18:27.913 08:31:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:27.913 08:31:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:27.913 08:31:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:27.913 08:31:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:27.913 | .driver_specific 00:18:27.913 | .nvme_error 00:18:27.913 | .status_code 00:18:27.913 | .command_transient_transport_error' 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 393 > 0 )) 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80711 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80711 ']' 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80711 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80711 00:18:28.171 killing process with pid 80711 00:18:28.171 Received shutdown signal, test time was about 2.000000 seconds 00:18:28.171 00:18:28.171 Latency(us) 00:18:28.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.171 =================================================================================================================== 00:18:28.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80711' 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80711 00:18:28.171 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80711 00:18:28.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
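The get_transient_errcount trace above (just before the next bdevperf instance is launched) is the verification step of the read-path digest test: it pulls bdevperf's per-bdev NVMe error statistics over the private RPC socket and asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted. A minimal stand-alone sketch of that query, using the socket path and jq filter shown in the trace (variable name errcount is illustrative), would be:
# Fetch iostats for nvme0n1 from the bdevperf instance on /var/tmp/bperf.sock and
# extract the transient-transport-error counter kept because of --nvme-error-stat.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test only passes if the injected digest errors surfaced as transient transport errors.
(( errcount > 0 ))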
00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80771 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80771 /var/tmp/bperf.sock 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80771 ']' 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.428 08:31:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:28.428 [2024-07-15 08:31:20.599682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
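For context, the trace above starts a fresh bdevperf instance for the 4096-byte randwrite case, in wait-for-tests mode on its own RPC socket. A rough equivalent of that launch, with the flag values taken from the trace (the socket-polling loop below only approximates the waitforlisten helper), would be:
# Core mask 0x2, private RPC socket, 4 KiB random writes, 2 s runtime, queue depth 128;
# -z makes bdevperf wait for a perform_tests RPC instead of starting immediately.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# Wait for the RPC socket to come up before configuring the run.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done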
00:18:28.428 [2024-07-15 08:31:20.600072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80771 ] 00:18:28.683 [2024-07-15 08:31:20.736954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.941 [2024-07-15 08:31:20.884969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.941 [2024-07-15 08:31:20.957768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:29.507 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.507 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:29.507 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:29.507 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:29.764 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:29.764 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.764 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:29.764 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.764 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.764 08:31:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:30.023 nvme0n1 00:18:30.023 08:31:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:30.023 08:31:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.023 08:31:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:30.023 08:31:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.023 08:31:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:30.023 08:31:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:30.282 Running I/O for 2 seconds... 
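Pulling the setup traced above into one place: bdevperf is told to keep NVMe error statistics and retry failed I/O indefinitely, the TCP controller is attached with data digests enabled, crc32c corruption is armed through the accel error-injection RPC (issued via rpc_cmd, i.e. the default application socket rather than bperf.sock), and only then is the deferred run started. A condensed sketch, with all addresses, NQN and values copied from the trace, might read:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Keep per-status-code NVMe error counters and retry failed I/O indefinitely in the bdev layer.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Make sure no stale crc32c injection is active before attaching the controller.
$rpc accel_error_inject_error -o crc32c -t disable
# Attach the target subsystem over TCP with data digest (--ddgst) enabled; namespaces appear as nvme0n1.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm crc32c corruption in the accel layer; the -o/-t/-i arguments are carried over verbatim from the trace.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256
# Start the deferred workload.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests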
00:18:30.282 [2024-07-15 08:31:22.357241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fef90 00:18:30.282 [2024-07-15 08:31:22.359953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.282 [2024-07-15 08:31:22.360016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.282 [2024-07-15 08:31:22.373878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190feb58 00:18:30.282 [2024-07-15 08:31:22.376511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.282 [2024-07-15 08:31:22.376571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.282 [2024-07-15 08:31:22.390508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fe2e8 00:18:30.282 [2024-07-15 08:31:22.393130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.282 [2024-07-15 08:31:22.393184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.282 [2024-07-15 08:31:22.407018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fda78 00:18:30.282 [2024-07-15 08:31:22.409588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.282 [2024-07-15 08:31:22.409641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.282 [2024-07-15 08:31:22.423489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fd208 00:18:30.282 [2024-07-15 08:31:22.426057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.282 [2024-07-15 08:31:22.426108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.282 [2024-07-15 08:31:22.440027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fc998 00:18:30.282 [2024-07-15 08:31:22.442548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.282 [2024-07-15 08:31:22.442607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.282 [2024-07-15 08:31:22.456656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fc128 00:18:30.542 [2024-07-15 08:31:22.459173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.459227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.473995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fb8b8 00:18:30.542 [2024-07-15 08:31:22.476481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.476544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.490598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fb048 00:18:30.542 [2024-07-15 08:31:22.493085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.493136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.507168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fa7d8 00:18:30.542 [2024-07-15 08:31:22.509668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.509743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.524145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f9f68 00:18:30.542 [2024-07-15 08:31:22.526584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.526656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.541248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f96f8 00:18:30.542 [2024-07-15 08:31:22.543649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.543709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.557806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f8e88 00:18:30.542 [2024-07-15 08:31:22.560184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.560248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.574513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f8618 00:18:30.542 [2024-07-15 08:31:22.576890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.576948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.591046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f7da8 00:18:30.542 [2024-07-15 08:31:22.593397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.607513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f7538 00:18:30.542 [2024-07-15 08:31:22.609824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.609871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.623933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f6cc8 00:18:30.542 [2024-07-15 08:31:22.626245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.626292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.640369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f6458 00:18:30.542 [2024-07-15 08:31:22.642635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.642685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.656730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f5be8 00:18:30.542 [2024-07-15 08:31:22.658963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.659011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.673079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f5378 00:18:30.542 [2024-07-15 08:31:22.675311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.675360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.689461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f4b08 00:18:30.542 [2024-07-15 08:31:22.691684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.691744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.542 [2024-07-15 08:31:22.705908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f4298 00:18:30.542 [2024-07-15 08:31:22.708122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.542 [2024-07-15 08:31:22.708170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.722346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f3a28 00:18:30.801 [2024-07-15 08:31:22.724545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.724592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.738707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f31b8 00:18:30.801 [2024-07-15 08:31:22.740883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.740928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.755047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f2948 00:18:30.801 [2024-07-15 08:31:22.757159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.757203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.771296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f20d8 00:18:30.801 [2024-07-15 08:31:22.773365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.773419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.787954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f1868 00:18:30.801 [2024-07-15 08:31:22.790027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.790086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.804671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f0ff8 00:18:30.801 [2024-07-15 08:31:22.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.806785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.821506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f0788 00:18:30.801 [2024-07-15 08:31:22.823571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.823628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.837749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eff18 00:18:30.801 [2024-07-15 08:31:22.839750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.839801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.853972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ef6a8 00:18:30.801 [2024-07-15 08:31:22.855964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.856013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.801 [2024-07-15 08:31:22.870215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eee38 00:18:30.801 [2024-07-15 08:31:22.872187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.801 [2024-07-15 08:31:22.872232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.802 [2024-07-15 08:31:22.886448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ee5c8 00:18:30.802 [2024-07-15 08:31:22.888428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.802 [2024-07-15 08:31:22.888472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.802 [2024-07-15 08:31:22.902666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190edd58 00:18:30.802 [2024-07-15 08:31:22.904608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.802 [2024-07-15 08:31:22.904658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.802 [2024-07-15 08:31:22.918967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ed4e8 00:18:30.802 [2024-07-15 08:31:22.920908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.802 [2024-07-15 08:31:22.920951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.802 [2024-07-15 08:31:22.935402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ecc78 00:18:30.802 [2024-07-15 08:31:22.937300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.802 [2024-07-15 08:31:22.937354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.802 [2024-07-15 08:31:22.951664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ec408 00:18:30.802 [2024-07-15 08:31:22.953523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.802 [2024-07-15 08:31:22.953573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.802 [2024-07-15 08:31:22.968094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ebb98 00:18:30.802 [2024-07-15 08:31:22.969961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.802 [2024-07-15 08:31:22.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:22.984393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eb328 00:18:31.060 [2024-07-15 08:31:22.986241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:22.986285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.000845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eaab8 00:18:31.060 [2024-07-15 08:31:23.002689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.002756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.017254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ea248 00:18:31.060 [2024-07-15 08:31:23.019043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.019091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.033534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e99d8 00:18:31.060 [2024-07-15 08:31:23.035319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 
08:31:23.035368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.049764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e9168 00:18:31.060 [2024-07-15 08:31:23.051517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.051565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.066069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e88f8 00:18:31.060 [2024-07-15 08:31:23.067812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.067856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.082277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e8088 00:18:31.060 [2024-07-15 08:31:23.084029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.084073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.098636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e7818 00:18:31.060 [2024-07-15 08:31:23.100374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.100420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.115001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e6fa8 00:18:31.060 [2024-07-15 08:31:23.116675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.116744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.131274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e6738 00:18:31.060 [2024-07-15 08:31:23.132918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.132965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.147701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e5ec8 00:18:31.060 [2024-07-15 08:31:23.149348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:31.060 [2024-07-15 08:31:23.149400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.164040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e5658 00:18:31.060 [2024-07-15 08:31:23.165651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.165704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:31.060 [2024-07-15 08:31:23.180284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e4de8 00:18:31.060 [2024-07-15 08:31:23.181903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.060 [2024-07-15 08:31:23.181957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:31.061 [2024-07-15 08:31:23.196632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e4578 00:18:31.061 [2024-07-15 08:31:23.198230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.061 [2024-07-15 08:31:23.198282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:31.061 [2024-07-15 08:31:23.212923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e3d08 00:18:31.061 [2024-07-15 08:31:23.214466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.061 [2024-07-15 08:31:23.214518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:31.061 [2024-07-15 08:31:23.229170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e3498 00:18:31.061 [2024-07-15 08:31:23.230707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.061 [2024-07-15 08:31:23.230762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.245471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e2c28 00:18:31.319 [2024-07-15 08:31:23.247105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.247159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.261930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e23b8 00:18:31.319 [2024-07-15 08:31:23.263425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2843 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.263471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.278305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e1b48 00:18:31.319 [2024-07-15 08:31:23.279783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.279832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.294746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e12d8 00:18:31.319 [2024-07-15 08:31:23.296204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.296254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.310981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e0a68 00:18:31.319 [2024-07-15 08:31:23.312406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.312453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.327325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e01f8 00:18:31.319 [2024-07-15 08:31:23.328757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.328803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.343590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190df988 00:18:31.319 [2024-07-15 08:31:23.345085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.345134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.360320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190df118 00:18:31.319 [2024-07-15 08:31:23.361767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.361823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.376975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190de8a8 00:18:31.319 [2024-07-15 08:31:23.378346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:12721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.378397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.393553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190de038 00:18:31.319 [2024-07-15 08:31:23.394895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.394941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.416599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190de038 00:18:31.319 [2024-07-15 08:31:23.419192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.419243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.433049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190de8a8 00:18:31.319 [2024-07-15 08:31:23.435601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.435652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.449357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190df118 00:18:31.319 [2024-07-15 08:31:23.451899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.451950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.465707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190df988 00:18:31.319 [2024-07-15 08:31:23.468239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.468296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:31.319 [2024-07-15 08:31:23.482135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e01f8 00:18:31.319 [2024-07-15 08:31:23.484634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.319 [2024-07-15 08:31:23.484695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.498835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e0a68 00:18:31.578 [2024-07-15 08:31:23.501323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.501390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.515709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e12d8 00:18:31.578 [2024-07-15 08:31:23.518188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.518257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.532615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e1b48 00:18:31.578 [2024-07-15 08:31:23.535056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.535118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.549090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e23b8 00:18:31.578 [2024-07-15 08:31:23.551519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.551578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.565439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e2c28 00:18:31.578 [2024-07-15 08:31:23.567873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.567922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.581692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e3498 00:18:31.578 [2024-07-15 08:31:23.584082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.584136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.597974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e3d08 00:18:31.578 [2024-07-15 08:31:23.600325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.600379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.614263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e4578 00:18:31.578 [2024-07-15 
08:31:23.616630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.616683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.630750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e4de8 00:18:31.578 [2024-07-15 08:31:23.633102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.578 [2024-07-15 08:31:23.633155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:31.578 [2024-07-15 08:31:23.647073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e5658 00:18:31.578 [2024-07-15 08:31:23.649386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.649433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:31.579 [2024-07-15 08:31:23.663260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e5ec8 00:18:31.579 [2024-07-15 08:31:23.665541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.665591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.579 [2024-07-15 08:31:23.679464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e6738 00:18:31.579 [2024-07-15 08:31:23.681711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.681774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:31.579 [2024-07-15 08:31:23.695859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e6fa8 00:18:31.579 [2024-07-15 08:31:23.698097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.698153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:31.579 [2024-07-15 08:31:23.712302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e7818 00:18:31.579 [2024-07-15 08:31:23.714547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.714615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:31.579 [2024-07-15 08:31:23.729373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e8088 
00:18:31.579 [2024-07-15 08:31:23.731629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.731696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:31.579 [2024-07-15 08:31:23.746373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e88f8 00:18:31.579 [2024-07-15 08:31:23.748589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.579 [2024-07-15 08:31:23.748653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.763229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e9168 00:18:31.838 [2024-07-15 08:31:23.765401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.765460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.779709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190e99d8 00:18:31.838 [2024-07-15 08:31:23.781857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.781907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.796096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ea248 00:18:31.838 [2024-07-15 08:31:23.798249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.798297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.812539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eaab8 00:18:31.838 [2024-07-15 08:31:23.814630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.814679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.828857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eb328 00:18:31.838 [2024-07-15 08:31:23.830938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.830989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.845211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with 
pdu=0x2000190ebb98 00:18:31.838 [2024-07-15 08:31:23.847281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.847335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.861559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ec408 00:18:31.838 [2024-07-15 08:31:23.863645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.863699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.877984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ecc78 00:18:31.838 [2024-07-15 08:31:23.880035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.880084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:31.838 [2024-07-15 08:31:23.894393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ed4e8 00:18:31.838 [2024-07-15 08:31:23.896410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.838 [2024-07-15 08:31:23.896463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:23.910921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190edd58 00:18:31.839 [2024-07-15 08:31:23.912962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:23.913019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:23.927625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190ee5c8 00:18:31.839 [2024-07-15 08:31:23.929628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:23.929686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:23.944176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eee38 00:18:31.839 [2024-07-15 08:31:23.946143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:23.946188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:23.960541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x230d360) with pdu=0x2000190ef6a8 00:18:31.839 [2024-07-15 08:31:23.962481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:23.962526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:23.976959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190eff18 00:18:31.839 [2024-07-15 08:31:23.978957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:23.979001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:23.993334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f0788 00:18:31.839 [2024-07-15 08:31:23.995223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:23.995278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:31.839 [2024-07-15 08:31:24.009694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f0ff8 00:18:31.839 [2024-07-15 08:31:24.011608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.839 [2024-07-15 08:31:24.011659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.026136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f1868 00:18:32.098 [2024-07-15 08:31:24.028006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.028053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.042466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f20d8 00:18:32.098 [2024-07-15 08:31:24.044325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.044375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.058845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f2948 00:18:32.098 [2024-07-15 08:31:24.060664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.060733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.075262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x230d360) with pdu=0x2000190f31b8 00:18:32.098 [2024-07-15 08:31:24.077075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.077128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.091616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f3a28 00:18:32.098 [2024-07-15 08:31:24.093401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.093460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.108106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f4298 00:18:32.098 [2024-07-15 08:31:24.109912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.109966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.124754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f4b08 00:18:32.098 [2024-07-15 08:31:24.126511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.126566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.141418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f5378 00:18:32.098 [2024-07-15 08:31:24.143156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.143211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.157803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f5be8 00:18:32.098 [2024-07-15 08:31:24.159501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.159555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.174293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f6458 00:18:32.098 [2024-07-15 08:31:24.176002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.176055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.190807] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f6cc8 00:18:32.098 [2024-07-15 08:31:24.192495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.192554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.207361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f7538 00:18:32.098 [2024-07-15 08:31:24.209040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.209102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.223935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f7da8 00:18:32.098 [2024-07-15 08:31:24.225563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.225616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.240367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f8618 00:18:32.098 [2024-07-15 08:31:24.241993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.242043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:32.098 [2024-07-15 08:31:24.256733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f8e88 00:18:32.098 [2024-07-15 08:31:24.258291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.098 [2024-07-15 08:31:24.258344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:32.357 [2024-07-15 08:31:24.274046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f96f8 00:18:32.357 [2024-07-15 08:31:24.275616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.357 [2024-07-15 08:31:24.275677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:32.357 [2024-07-15 08:31:24.290461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190f9f68 00:18:32.357 [2024-07-15 08:31:24.292052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.357 [2024-07-15 08:31:24.292099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:32.357 
[2024-07-15 08:31:24.307038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fa7d8 00:18:32.357 [2024-07-15 08:31:24.308598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.357 [2024-07-15 08:31:24.308645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:32.357 [2024-07-15 08:31:24.323423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d360) with pdu=0x2000190fb048 00:18:32.357 [2024-07-15 08:31:24.324946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.357 [2024-07-15 08:31:24.324991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:32.357 00:18:32.357 Latency(us) 00:18:32.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.357 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.357 nvme0n1 : 2.00 15356.64 59.99 0.00 0.00 8327.23 5213.09 32172.22 00:18:32.357 =================================================================================================================== 00:18:32.357 Total : 15356.64 59.99 0.00 0.00 8327.23 5213.09 32172.22 00:18:32.357 0 00:18:32.357 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:32.357 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:32.357 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:32.357 | .driver_specific 00:18:32.357 | .nvme_error 00:18:32.357 | .status_code 00:18:32.357 | .command_transient_transport_error' 00:18:32.357 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80771 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80771 ']' 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80771 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80771 00:18:32.615 killing process with pid 80771 00:18:32.615 Received shutdown signal, test time was about 2.000000 seconds 00:18:32.615 00:18:32.615 Latency(us) 00:18:32.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.615 =================================================================================================================== 00:18:32.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.615 08:31:24 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80771' 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80771 00:18:32.615 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80771 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80826 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80826 /var/tmp/bperf.sock 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80826 ']' 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:32.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.874 08:31:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:32.874 [2024-07-15 08:31:24.962465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:32.874 [2024-07-15 08:31:24.962861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80826 ] 00:18:32.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:32.874 Zero copy mechanism will not be used. 
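The first randwrite pass above ends with get_transient_errcount returning 120 digest-induced transient transport errors, which host/digest.sh@71 asserts is non-zero before killing bdevperf (pid 80771) and relaunching it for the 131072-byte, queue-depth-16 pass. The count is read from bdevperf's per-bdev NVMe error statistics; below is a condensed, hedged sketch of the query traced at host/digest.sh@18 and @28, assuming the same bperf RPC socket and bdev name as this run.
# Hedged sketch of the transient-error count query, not the digest.sh helper itself.
# Assumes bdevperf listens on /var/tmp/bperf.sock and the attached bdev is nvme0n1;
# the nvme_error block is only populated because bdev_nvme_set_options was called
# with --nvme-error-stat earlier in the test.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'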
00:18:33.134 [2024-07-15 08:31:25.103198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.134 [2024-07-15 08:31:25.230946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.134 [2024-07-15 08:31:25.287250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:34.070 08:31:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.070 08:31:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:34.070 08:31:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:34.070 08:31:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:34.070 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:34.070 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.070 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.070 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.070 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:34.070 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:34.637 nvme0n1 00:18:34.637 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:34.637 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.637 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.637 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.637 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:34.637 08:31:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:34.637 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:34.637 Zero copy mechanism will not be used. 00:18:34.637 Running I/O for 2 seconds... 
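The trace above sets up the 131072-byte error pass: bdevperf is configured to keep per-NVMe error statistics with a retry count of -1 (host/digest.sh@61), crc32c error injection is first disabled via rpc_cmd (@63), the controller is attached with --ddgst so each NVMe/TCP data PDU carries a CRC32C data digest (@64), injection is then switched to corrupt mode (@67), and perform_tests starts the 2-second run (@69). The corrupted digests are what produce the Data digest error and COMMAND TRANSIENT TRANSPORT ERROR completions that follow. A condensed, hedged restatement of that RPC sequence is sketched below, using the sockets and addresses from this run; which socket rpc_cmd targets is an assumption.
# Hedged restatement of the setup traced above; not a drop-in replacement for digest.sh.
BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'  # bdevperf RPC socket
APP_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'                           # default socket, as rpc_cmd appears to use (assumption)
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # collect NVMe error counters; retry count -1 per the trace
$APP_RPC accel_error_inject_error -o crc32c -t disable                     # start with crc32c injection off
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                 # enable TCP data digest (CRC32C)
$APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt crc32c results (-i 32 as in the trace)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # 2-second randwrite run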
00:18:34.637 [2024-07-15 08:31:26.724254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.724599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.724632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.729669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.729987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.730213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.735343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.735764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.735879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.740947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.741332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.741362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.746592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.747162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.747481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.752821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.753374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.753656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.758909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.759446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.759701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.765014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.765389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.765485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.770667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.771224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.771487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.637 [2024-07-15 08:31:26.776661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.637 [2024-07-15 08:31:26.777213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.637 [2024-07-15 08:31:26.777481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.638 [2024-07-15 08:31:26.782678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.638 [2024-07-15 08:31:26.783295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.638 [2024-07-15 08:31:26.783650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.638 [2024-07-15 08:31:26.788997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.638 [2024-07-15 08:31:26.789626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.638 [2024-07-15 08:31:26.789754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.638 [2024-07-15 08:31:26.795103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.638 [2024-07-15 08:31:26.795832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.638 [2024-07-15 08:31:26.796124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.638 [2024-07-15 08:31:26.801481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.638 [2024-07-15 08:31:26.802134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.638 [2024-07-15 08:31:26.802492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.638 [2024-07-15 08:31:26.807833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.638 [2024-07-15 08:31:26.808471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.638 [2024-07-15 08:31:26.808581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.813671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.814099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.814305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.819407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.819809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.819923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.824916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.825280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.825392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.830163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.830303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.830397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.835437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.835793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.836111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.841325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.841477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.841571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.846566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.846692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.846814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.851820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.851963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.852100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.857069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.857219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.857312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.862276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.862427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.862545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.867506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.867664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.867792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.872800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.872966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.873068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.878299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.878601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 
[2024-07-15 08:31:26.878964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.884146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.884375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.884521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.889881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.890161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.890282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.895386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.895532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.895632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.900626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.897 [2024-07-15 08:31:26.900785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.897 [2024-07-15 08:31:26.900891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.897 [2024-07-15 08:31:26.905847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.905990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.906082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.911106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.911256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.911387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.916358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.916504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.916599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.921852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.922142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.922469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.927640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.928005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.928282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.933429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.933707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.933822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.938885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.938964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.944153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.944251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.949356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.949433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.949457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.954575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.954650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.954674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.959862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.959948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.959972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.965078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.965155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.965178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.970369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.970460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.970484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.975672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.975781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.975811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.980966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.981047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.981071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.986214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.986288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.986334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.991440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.991515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.991539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:26.996676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:26.996762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:26.996785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.001930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.002003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.002034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.007157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.007233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.007258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.012403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.012477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.012500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.017641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.017729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.017754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.022833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.022906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.022930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.028041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.028115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.028138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.033297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.033381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.033405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.038517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.038606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.038631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.043867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.043952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.043977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.049126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.049205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.049227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.054398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.898 [2024-07-15 08:31:27.054483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.898 [2024-07-15 08:31:27.054505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:34.898 [2024-07-15 08:31:27.059685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.899 [2024-07-15 08:31:27.059770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.899 [2024-07-15 08:31:27.059793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:34.899 [2024-07-15 08:31:27.064948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.899 [2024-07-15 
08:31:27.065022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.899 [2024-07-15 08:31:27.065044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:34.899 [2024-07-15 08:31:27.070086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:34.899 [2024-07-15 08:31:27.070168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:34.899 [2024-07-15 08:31:27.070189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.075280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.075351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.075373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.080472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.080551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.080573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.085686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.085768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.085790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.090883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.090955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.090977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.096099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.096172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.096202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.101270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with 
pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.101352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.101374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.106480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.106552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.106574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.111746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.111840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.111862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.117117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.117209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.117233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.122387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.122469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.122495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.127621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.127705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.127746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.132893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.132975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.132998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.138136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.138209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.138231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.143415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.143488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.143510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.148631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.148714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.148766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.154052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.154134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.154158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.159303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.159382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.159406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.164607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.164682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.164706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.169842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.169919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.169950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.175085] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.175161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.175192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.180323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.180398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.180422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.185558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.185631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.185655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.158 [2024-07-15 08:31:27.190853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.158 [2024-07-15 08:31:27.190927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.158 [2024-07-15 08:31:27.190949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.196074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.196152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.196174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.201353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.201448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.201472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.206654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.206753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.206787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.211927] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.212002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.212024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.217104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.217178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.217201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.222322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.222395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.222422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.227526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.227599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.227622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.232789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.232864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.232886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.237975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.238049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.238071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.243202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.243286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.243309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 
[2024-07-15 08:31:27.248401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.248473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.248494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.253630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.253703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.253739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.258848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.258921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.258942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.264086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.264160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.264182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.269304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.269375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.269397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.274469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.274544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.274566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.279754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.279831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.279854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.285023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.285109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.285134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.290225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.290306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.290329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.295516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.295594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.295617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.300742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.300825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.300847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.305962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.306032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.306054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.311130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.311208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.311229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.316399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.316471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.316495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.321714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.321805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.321830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.327012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.327089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.327112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.159 [2024-07-15 08:31:27.332305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.159 [2024-07-15 08:31:27.332377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.159 [2024-07-15 08:31:27.332400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.337579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.337661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.337686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.342883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.342961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.342985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.348098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.348174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.348196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.353365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.353446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.353468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.358628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.358712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.358769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.363879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.363962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.363985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.369065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.369153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.369178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.374329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.374409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.374433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.379537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.379614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.379638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.384776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.384849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.384872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.390008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.390079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.390102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.395230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.395314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.395337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.400421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.400494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.400516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.405664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.405753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.405776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.410973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.411067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.411091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.416221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.416305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.416332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.421409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.421497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 08:31:27.421520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.426738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.418 [2024-07-15 08:31:27.426823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.418 [2024-07-15 
08:31:27.426846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.418 [2024-07-15 08:31:27.431993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.432068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.432093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.437211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.437295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.437317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.442454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.442525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.442549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.447712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.447812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.447835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.452927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.452999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.453022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.458123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.458196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.458219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.463434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.463509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:35.419 [2024-07-15 08:31:27.463533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.468686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.468773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.468796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.473971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.474046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.474068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.479167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.479239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.479272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.484391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.484463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.484485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.489617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.489697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.489731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.494844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.494918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.494940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.500061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.500142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.500174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.505234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.505324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.505348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.510452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.510525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.510547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.515751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.515832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.515856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.520954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.521027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.521050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.526752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.526849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.526871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.531993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.532081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.532105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.537246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.537320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.537343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.542422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.542498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.542524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.547662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.547808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.552840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.552928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.552951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.558119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.558204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.558228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.563340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.563414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.563438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.568615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.568704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.568741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.573866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.573967] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.573990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.579080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.419 [2024-07-15 08:31:27.579169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.419 [2024-07-15 08:31:27.579191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.419 [2024-07-15 08:31:27.584363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.420 [2024-07-15 08:31:27.584439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.420 [2024-07-15 08:31:27.584463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.420 [2024-07-15 08:31:27.589609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.420 [2024-07-15 08:31:27.589694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.420 [2024-07-15 08:31:27.589731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.594941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.595032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.595057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.600227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.600312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.600336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.605456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.605556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.605580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.610777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.610865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.610889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.616059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.616148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.616172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.621312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.621403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.621427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.626557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.626647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.626671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.631814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.631889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.631914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.637049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.637139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.637164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.642333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 08:31:27.642417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.679 [2024-07-15 08:31:27.642443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.679 [2024-07-15 08:31:27.647611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.679 [2024-07-15 
08:31:27.647694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.647734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.652865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.652961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.652992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.658103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.658193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.658217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.663387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.663471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.663496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.668635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.668732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.668766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.673906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.673984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.674008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.679149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.679241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.679278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.684476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with 
pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.684553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.684584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.689674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.689789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.689814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.694929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.695021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.695045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.700216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.700296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.700321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.705417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.705515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.705539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.710658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.710777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.710802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.715932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.716022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.716045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.721155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.721239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.721271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.726379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.726457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.726484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.731555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.731646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.731669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.736854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.736944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.736967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.741988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.742077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.742100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.747206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.747302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.747324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.752409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.752497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.752520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.757681] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.757772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.757794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.762887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.762960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.762982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.768112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.768183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.768206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.773339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.773412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.773434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.778524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.778613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.778635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.783754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.783840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.783862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.788912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.789002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.789024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.680 [2024-07-15 08:31:27.794131] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.680 [2024-07-15 08:31:27.794203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.680 [2024-07-15 08:31:27.794226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.799368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.799439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.799461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.804574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.804656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.804679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.809839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.809912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.809934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.815013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.815081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.815103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.820205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.820282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.820304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.825511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.825583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.825605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.681 
[2024-07-15 08:31:27.830658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.830765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.830788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.835838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.835924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.835947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.840999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.841088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.841110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.846198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.846293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.846316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.681 [2024-07-15 08:31:27.851437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.681 [2024-07-15 08:31:27.851512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.681 [2024-07-15 08:31:27.851535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.940 [2024-07-15 08:31:27.856666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.940 [2024-07-15 08:31:27.856780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.940 [2024-07-15 08:31:27.856802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.940 [2024-07-15 08:31:27.861880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.940 [2024-07-15 08:31:27.861950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.861972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.867146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.867222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.867248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.872402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.872494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.872517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.877635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.877743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.877769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.882908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.882980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.883002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.888026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.888132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.888154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.893239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.893319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.893341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.898462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.898542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.898565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.903757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.903839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.903862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.908987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.909074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.909097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.914257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.914331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.914353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.919479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.919572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.919595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.924733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.924831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.924855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.930046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.930122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.930145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.935259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.935342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.935365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.940640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.940736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.940772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.945999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.946071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.946092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.951198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.951295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.951318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.956386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.956458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.956479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.961510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.961595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.961619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.966647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.966744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.966767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.971939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.972010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.972032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.977065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.977140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.977162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.982227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.982303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.982329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.987424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.987494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.987515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.992526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.992592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.992614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:27.997623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:27.997704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:27.997738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:28.002690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:28.002784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:28.002805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:28.007792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:28.007873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.941 [2024-07-15 08:31:28.007894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.941 [2024-07-15 08:31:28.012922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.941 [2024-07-15 08:31:28.012994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.013015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.018022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.018091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.018113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.023174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.023257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.023291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.028317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.028390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.028412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.033522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.033596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.033618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.038913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.038986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.039008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.044144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.044217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 
08:31:28.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.049366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.049449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.049471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.054656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.054757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.054780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.059896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.059980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.060002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.065109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.065192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.065216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.070335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.070420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.070444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.075567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.075641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.075662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.081011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.081100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:35.942 [2024-07-15 08:31:28.081124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.086235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.086311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.086334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.091418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.091517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.096613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.096687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.096709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.101840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.101923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.101946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.107088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.107169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.107190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.942 [2024-07-15 08:31:28.112280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:35.942 [2024-07-15 08:31:28.112360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.942 [2024-07-15 08:31:28.112382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.117476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.117563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.117585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.122662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.122768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.122790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.127988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.128067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.128089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.133295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.133381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.133404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.138617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.138699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.138732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.143973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.144047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.144069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.149145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.149241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.149282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.154421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.154532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.154556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.159903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.160013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.160039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.165096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.165202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.165227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.170330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.170432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.170460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.175605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.175727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.175752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.180708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.180807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.180830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.185963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.186040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.186064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.191162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.191239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.191261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.196461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.196547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.196569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.201730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.201843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.201866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.206890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.206973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.206997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.212158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.212231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.212254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.217367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.217460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.217484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.222597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.222686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.201 [2024-07-15 08:31:28.222709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.201 [2024-07-15 08:31:28.227831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.201 [2024-07-15 08:31:28.227919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.227942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.233144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.233244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.233266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.238512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.238583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.238604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.243987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.244058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.244079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.249298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.249383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.249404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.254617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.254702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.254724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.259898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.259966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.259988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.265055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 
08:31:28.265149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.265185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.270403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.270473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.270496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.275788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.275857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.275879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.281081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.281169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.281191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.286404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.286485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.286506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.291502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.291597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.291620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.296687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.296804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.296836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.301949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with 
pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.302036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.302060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.307215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.307320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.307343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.312422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.312500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.312522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.317760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.317845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.317873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.322958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.323028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.323057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.328193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.328264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.328285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.333356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.333441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.333463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.338556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.338642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.338664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.343875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.343962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.343983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.349114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.349188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.349209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.354256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.354332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.354354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.359452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.359543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.359566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.364600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.364680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.364702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.369708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.369791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.369813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.202 [2024-07-15 08:31:28.374920] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.202 [2024-07-15 08:31:28.374991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.202 [2024-07-15 08:31:28.375015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.461 [2024-07-15 08:31:28.380149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.461 [2024-07-15 08:31:28.380220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.461 [2024-07-15 08:31:28.380242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.461 [2024-07-15 08:31:28.385350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.461 [2024-07-15 08:31:28.385431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.461 [2024-07-15 08:31:28.385454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.461 [2024-07-15 08:31:28.390636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.461 [2024-07-15 08:31:28.390720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.461 [2024-07-15 08:31:28.390743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.461 [2024-07-15 08:31:28.396074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.461 [2024-07-15 08:31:28.396155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.396192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.401351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.401426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.401448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.406458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.406528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.406550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.411698] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.411799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.411820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.416853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.416923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.416944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.421903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.421985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.422006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.427012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.427095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.427117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.432215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.432299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.432322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.437358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.437439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.437460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.442542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.442624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.442646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 
[2024-07-15 08:31:28.447761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.447848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.447870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.452968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.453052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.453073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.458156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.458231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.458253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.463322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.463393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.463415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.468461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.468544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.468566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.473647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.473748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.473770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.478928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.479004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.479025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.484132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.484240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.489295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.489378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.489400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.494481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.494554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.494576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.499638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.499709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.499745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.504798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.504889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.504911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.509970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.510051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.510073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.515149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.515231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.515253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.520332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.520404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.520426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.525500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.525593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.525615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.530681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.530777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.530799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.535929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.536002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.536023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.541113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.541194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.541216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.546340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.546411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.546432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.551578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.551652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.551673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.556760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.556845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.556867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.561986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.562056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.562077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.567143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.567222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.567243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.572349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.572422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.572443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.577507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.577589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.577611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.582816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.582889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.582911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.588002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.588074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.588095] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.593125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.593208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.593229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.598223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.598306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.462 [2024-07-15 08:31:28.598328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.462 [2024-07-15 08:31:28.603419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.462 [2024-07-15 08:31:28.603504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.603527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.463 [2024-07-15 08:31:28.608616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.463 [2024-07-15 08:31:28.608698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.608733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.463 [2024-07-15 08:31:28.613858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.463 [2024-07-15 08:31:28.613945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.613966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.463 [2024-07-15 08:31:28.619086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.463 [2024-07-15 08:31:28.619168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.619189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.463 [2024-07-15 08:31:28.624337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.463 [2024-07-15 08:31:28.624435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.624457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.463 [2024-07-15 08:31:28.629534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.463 [2024-07-15 08:31:28.629616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.629638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.463 [2024-07-15 08:31:28.634730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.463 [2024-07-15 08:31:28.634818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.463 [2024-07-15 08:31:28.634840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.721 [2024-07-15 08:31:28.639898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.721 [2024-07-15 08:31:28.639983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.721 [2024-07-15 08:31:28.640005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.721 [2024-07-15 08:31:28.645092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.721 [2024-07-15 08:31:28.645187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.721 [2024-07-15 08:31:28.645210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.721 [2024-07-15 08:31:28.650326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.721 [2024-07-15 08:31:28.650409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.721 [2024-07-15 08:31:28.650431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.721 [2024-07-15 08:31:28.655610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.721 [2024-07-15 08:31:28.655689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.721 [2024-07-15 08:31:28.655712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.721 [2024-07-15 08:31:28.660851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.721 [2024-07-15 08:31:28.660923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.721 [2024-07-15 
08:31:28.660946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.721 [2024-07-15 08:31:28.666107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.721 [2024-07-15 08:31:28.666191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.666214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.671370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.671449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.671474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.676625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.676713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.676749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.681913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.681990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.682013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.687084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.687160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.687184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.692360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.692435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.692458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.697623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.697732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:36.722 [2024-07-15 08:31:28.697782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.702959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.703047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.703070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.708233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.708322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.708344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.722 [2024-07-15 08:31:28.713473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230d500) with pdu=0x2000190fef90 00:18:36.722 [2024-07-15 08:31:28.713563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.722 [2024-07-15 08:31:28.713586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.722 00:18:36.722 Latency(us) 00:18:36.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.722 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:36.722 nvme0n1 : 2.00 5864.78 733.10 0.00 0.00 2722.09 2055.45 6642.97 00:18:36.722 =================================================================================================================== 00:18:36.722 Total : 5864.78 733.10 0.00 0.00 2722.09 2055.45 6642.97 00:18:36.722 0 00:18:36.722 08:31:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:36.722 08:31:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:36.722 08:31:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:36.722 | .driver_specific 00:18:36.722 | .nvme_error 00:18:36.722 | .status_code 00:18:36.722 | .command_transient_transport_error' 00:18:36.722 08:31:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 )) 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80826 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80826 ']' 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80826 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
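At this point host/digest.sh turns the flood of injected digest errors above into a pass/fail check: get_transient_errcount queries bdev iostat over the bperf RPC socket and pulls the command_transient_transport_error counter out of the JSON with jq. A minimal bash sketch of that step, using the script path, socket path and jq filter exactly as they appear in the trace (the surrounding wrapper code is illustrative, not the verbatim host/digest.sh source):

# Sketch only: rpc.py path, bperf.sock and the jq filter are copied from the
# trace above; the wrapper itself is a reconstruction, not the real script.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The test only requires at least one transient transport error completion;
# this run counted 378 of them, as the check on the next trace line shows.
(( errcount > 0 ))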
00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80826 00:18:36.981 killing process with pid 80826 00:18:36.981 Received shutdown signal, test time was about 2.000000 seconds 00:18:36.981 00:18:36.981 Latency(us) 00:18:36.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.981 =================================================================================================================== 00:18:36.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80826' 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80826 00:18:36.981 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80826 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80616 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80616 ']' 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80616 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80616 00:18:37.240 killing process with pid 80616 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80616' 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80616 00:18:37.240 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80616 00:18:37.499 00:18:37.499 real 0m19.258s 00:18:37.499 user 0m36.986s 00:18:37.499 sys 0m5.461s 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.499 ************************************ 00:18:37.499 END TEST nvmf_digest_error 00:18:37.499 ************************************ 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 
00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.499 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.499 rmmod nvme_tcp 00:18:37.499 rmmod nvme_fabrics 00:18:37.757 rmmod nvme_keyring 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:37.757 Process with pid 80616 is not found 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80616 ']' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80616 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80616 ']' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80616 00:18:37.757 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80616) - No such process 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80616 is not found' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:37.757 00:18:37.757 real 0m39.572s 00:18:37.757 user 1m14.980s 00:18:37.757 sys 0m11.140s 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.757 ************************************ 00:18:37.757 END TEST nvmf_digest 00:18:37.757 ************************************ 00:18:37.757 08:31:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:37.757 08:31:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:37.757 08:31:29 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:37.757 08:31:29 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:37.757 08:31:29 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:37.757 08:31:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:37.757 08:31:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.757 08:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.757 ************************************ 00:18:37.757 START TEST nvmf_host_multipath 00:18:37.757 ************************************ 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:37.757 * Looking for test storage... 
00:18:37.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.757 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:37.758 08:31:29 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.758 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:38.015 Cannot find device "nvmf_tgt_br" 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.015 Cannot find device "nvmf_tgt_br2" 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:38.015 Cannot find device "nvmf_tgt_br" 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:38.015 Cannot find device "nvmf_tgt_br2" 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:38.015 08:31:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:38.015 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:38.016 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:38.016 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:38.016 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:38.016 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:38.016 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:38.016 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:38.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:18:38.275 00:18:38.275 --- 10.0.0.2 ping statistics --- 00:18:38.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.275 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:38.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:38.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:38.275 00:18:38.275 --- 10.0.0.3 ping statistics --- 00:18:38.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.275 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:38.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:38.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:38.275 00:18:38.275 --- 10.0.0.1 ping statistics --- 00:18:38.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.275 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81098 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81098 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 81098 ']' 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.275 08:31:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:38.275 [2024-07-15 08:31:30.393335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:38.275 [2024-07-15 08:31:30.393647] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.534 [2024-07-15 08:31:30.536610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:38.534 [2024-07-15 08:31:30.696666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.534 [2024-07-15 08:31:30.697021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.534 [2024-07-15 08:31:30.697202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.534 [2024-07-15 08:31:30.697500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.534 [2024-07-15 08:31:30.697663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
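The nvmf_veth_init sequence traced above builds the test network: one veth end (nvmf_init_if, 10.0.0.1/24) stays in the default namespace for the initiator, two others (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the peer ends are joined by the nvmf_br bridge. A condensed replay of those steps, using the same interface names and addresses that appear in the log (root privileges assumed; the script first tears down any leftover interfaces, which is omitted here):

  # condensed replay of the nvmf_veth_init topology traced above (run as root)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target then runs inside the namespace while the initiator stays outside:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The ping checks in the trace (10.0.0.2 and 10.0.0.3 from the default namespace, 10.0.0.1 from inside the namespace) simply confirm this topology is reachable before the NVMe-oF listeners are created.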
00:18:38.534 [2024-07-15 08:31:30.697961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.534 [2024-07-15 08:31:30.697973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.806 [2024-07-15 08:31:30.776057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81098 00:18:39.373 08:31:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:39.631 [2024-07-15 08:31:31.782135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.889 08:31:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:40.147 Malloc0 00:18:40.148 08:31:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:40.406 08:31:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:40.664 08:31:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.922 [2024-07-15 08:31:32.902795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.922 08:31:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:41.192 [2024-07-15 08:31:33.227288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81154 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81154 /var/tmp/bdevperf.sock 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81154 ']' 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.192 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.192 08:31:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:42.128 08:31:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.128 08:31:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:42.128 08:31:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:42.386 08:31:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:42.645 Nvme0n1 00:18:42.904 08:31:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:43.162 Nvme0n1 00:18:43.162 08:31:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:43.162 08:31:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:44.099 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:44.099 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:44.357 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:44.615 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:44.615 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81199 00:18:44.615 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:44.615 08:31:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:51.273 Attaching 4 probes... 
00:18:51.273 @path[10.0.0.2, 4421]: 16296 00:18:51.273 @path[10.0.0.2, 4421]: 16824 00:18:51.273 @path[10.0.0.2, 4421]: 16816 00:18:51.273 @path[10.0.0.2, 4421]: 16749 00:18:51.273 @path[10.0.0.2, 4421]: 16975 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81199 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:51.273 08:31:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:51.273 08:31:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:51.532 08:31:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:51.532 08:31:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81316 00:18:51.532 08:31:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:51.532 08:31:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.101 Attaching 4 probes... 
00:18:58.101 @path[10.0.0.2, 4420]: 17366 00:18:58.101 @path[10.0.0.2, 4420]: 17522 00:18:58.101 @path[10.0.0.2, 4420]: 17498 00:18:58.101 @path[10.0.0.2, 4420]: 14668 00:18:58.101 @path[10.0.0.2, 4420]: 15850 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81316 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:58.101 08:31:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:58.101 08:31:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:58.361 08:31:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:58.361 08:31:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81424 00:18:58.361 08:31:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:58.361 08:31:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.933 Attaching 4 probes... 
00:19:04.933 @path[10.0.0.2, 4421]: 13574 00:19:04.933 @path[10.0.0.2, 4421]: 16225 00:19:04.933 @path[10.0.0.2, 4421]: 16095 00:19:04.933 @path[10.0.0.2, 4421]: 16250 00:19:04.933 @path[10.0.0.2, 4421]: 16499 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81424 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:04.933 08:31:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:04.933 08:31:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:05.191 08:31:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:05.191 08:31:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81542 00:19:05.191 08:31:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:05.191 08:31:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.749 Attaching 4 probes... 
00:19:11.749 00:19:11.749 00:19:11.749 00:19:11.749 00:19:11.749 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81542 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:11.749 08:32:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:12.007 08:32:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:12.007 08:32:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81655 00:19:12.007 08:32:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:12.007 08:32:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:18.570 Attaching 4 probes... 
00:19:18.570 @path[10.0.0.2, 4421]: 17071 00:19:18.570 @path[10.0.0.2, 4421]: 17492 00:19:18.570 @path[10.0.0.2, 4421]: 17405 00:19:18.570 @path[10.0.0.2, 4421]: 17352 00:19:18.570 @path[10.0.0.2, 4421]: 17367 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81655 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:18.570 [2024-07-15 08:32:10.615971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe8f70 is same with the state(5) to be set 00:19:18.570 08:32:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:19.529 08:32:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:19.529 08:32:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81777 00:19:19.529 08:32:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:19.529 08:32:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:26.097 08:32:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:26.097 08:32:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:26.097 Attaching 4 probes... 
00:19:26.097 @path[10.0.0.2, 4420]: 15896 00:19:26.097 @path[10.0.0.2, 4420]: 16851 00:19:26.097 @path[10.0.0.2, 4420]: 16783 00:19:26.097 @path[10.0.0.2, 4420]: 16827 00:19:26.097 @path[10.0.0.2, 4420]: 16770 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81777 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:26.097 [2024-07-15 08:32:18.247150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:26.097 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:26.355 08:32:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:32.912 08:32:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:32.912 08:32:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81953 00:19:32.912 08:32:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:32.912 08:32:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:39.484 Attaching 4 probes... 
00:19:39.484 @path[10.0.0.2, 4421]: 16972 00:19:39.484 @path[10.0.0.2, 4421]: 17352 00:19:39.484 @path[10.0.0.2, 4421]: 17277 00:19:39.484 @path[10.0.0.2, 4421]: 17371 00:19:39.484 @path[10.0.0.2, 4421]: 15778 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81953 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81154 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81154 ']' 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81154 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81154 00:19:39.484 killing process with pid 81154 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81154' 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81154 00:19:39.484 08:32:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81154 00:19:39.484 Connection closed with partial response: 00:19:39.484 00:19:39.484 00:19:39.484 08:32:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81154 00:19:39.484 08:32:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:39.484 [2024-07-15 08:31:33.303787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:39.484 [2024-07-15 08:31:33.303916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81154 ] 00:19:39.485 [2024-07-15 08:31:33.442960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.485 [2024-07-15 08:31:33.591578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.485 [2024-07-15 08:31:33.665449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:39.485 Running I/O for 90 seconds... 
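Before the long nvme_qpair trace from try.txt that follows, it is worth summarizing the check each confirm_io_on_port iteration above performs: flip the ANA state of the two listeners, attach the nvmf_path.bt bpftrace probes to the target, let bdevperf run for six seconds, then compare the port the target reports for the requested ANA state against the port the probes actually saw I/O on. A sketch of one iteration, using the pid (81098), NQN and addresses from this run; the real host/multipath.sh helpers differ in detail (they record the bpftrace pid, write trace.txt under test/nvmf/host, and remove it afterwards):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # advertise 4420 as non_optimized and 4421 as optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # trace which path the target's nvmf layer services I/O on (pid 81098 = nvmf_tgt in this run)
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81098 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  sleep 6
  # port the target claims has the expected ANA state ...
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
      jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # ... versus the port the probes saw I/O on (first "@path[10.0.0.2, PORT]: count" line)
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ $port == "$active_port" ]] && echo "I/O confirmed on port $port"

In the "inaccessible inaccessible" iteration above, no @path lines are produced and both sides of the comparison are empty strings, which is why that pass still succeeds.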
00:19:39.485 [2024-07-15 08:31:43.522574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.522963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.522978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:39.485 [2024-07-15 08:31:43.523837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.485 [2024-07-15 08:31:43.523910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.523967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.523992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.524008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.524029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.524045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.524076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.524092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.524113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.524129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.524149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.485 [2024-07-15 08:31:43.524165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:39.485 [2024-07-15 08:31:43.524186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 08:31:43.524540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.524948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.524970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:19:39.486 [2024-07-15 08:31:43.525000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.486 [2024-07-15 08:31:43.525793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 08:31:43.525819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.525836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.525858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.525873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.525895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.525910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.525931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.525967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.525991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.526507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.526973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.526995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.527010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.527046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.527250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:19:39.487 [2024-07-15 08:31:43.527299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.527314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.528787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.487 [2024-07-15 08:31:43.528818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.528847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.528890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:39.487 [2024-07-15 08:31:43.528916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.487 [2024-07-15 08:31:43.528932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:43.528953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:43.528968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:43.528989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:43.529005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:43.529026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:43.529042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:43.529062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:43.529077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:43.529099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:43.529114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:43.529157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:43.529177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.488 [2024-07-15 08:31:50.123871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.123907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.123943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.123978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.123999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:39.488 [2024-07-15 08:31:50.124313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:39.488 [2024-07-15 08:31:50.124622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.488 [2024-07-15 08:31:50.124637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 
nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.124681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.124730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.124771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.124814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.124850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.124887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.124923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.124959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.124980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.124995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:19:39.489 [2024-07-15 08:31:50.125431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.489 [2024-07-15 08:31:50.125671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.125970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.489 [2024-07-15 08:31:50.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:39.489 [2024-07-15 08:31:50.126006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:39.490 [2024-07-15 08:31:50.126543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.126734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.126974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.126996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.490 [2024-07-15 08:31:50.127346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.127386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.127421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.127464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.127502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.127543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.490 [2024-07-15 08:31:50.127580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:39.490 [2024-07-15 08:31:50.127601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.127616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.127651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:19:39.491 [2024-07-15 08:31:50.127672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.127688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.127741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.127779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.127815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.127851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.127887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.127922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.127968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.127989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.128003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.128024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.128059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.128075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.128828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:50.128856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.128892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.128909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.128939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.128955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.128985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.129000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.129031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.129047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.129076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.129092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.129121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.129137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.129167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.129182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:50.129242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:50.129262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.491 [2024-07-15 08:31:57.225537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:39.491 [2024-07-15 08:31:57.225609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.225969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.491 [2024-07-15 08:31:57.225984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.491 [2024-07-15 08:31:57.226005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226377] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 
m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.226793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.226974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.226995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.492 [2024-07-15 08:31:57.227396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.227457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.227496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.227532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.227575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:39.492 [2024-07-15 08:31:57.227604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.492 [2024-07-15 08:31:57.227621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.227965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.227998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.493 [2024-07-15 08:31:57.228496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.493 [2024-07-15 08:31:57.228811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:39.493 [2024-07-15 08:31:57.228832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.228847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.228868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.228883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.228904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.228919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.228940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.228955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.228981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.228997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.229043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:19:39.494 [2024-07-15 08:31:57.229064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.229079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.229116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.229152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.229188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.229765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.229781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.494 [2024-07-15 08:31:57.230569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.230943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.230988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.231007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.231038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:31:57.231055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:31:57.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:39.494 [2024-07-15 08:31:57.231107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:32:10.616481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:32:10.616561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:32:10.616600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:32:10.616637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:32:10.616673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.494 [2024-07-15 08:32:10.616749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:39.494 [2024-07-15 08:32:10.616775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.616791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.616813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.616828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.616849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.616864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.616885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2248 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.616900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.616921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.616935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.616957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.616972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.616993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617269] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.617531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2408 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.617976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.617989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.495 [2024-07-15 08:32:10.618017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.618045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.618082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.618110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.618138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.618166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.495 [2024-07-15 08:32:10.618181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.495 [2024-07-15 08:32:10.618194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 
08:32:10.618252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.618955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.618983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.618998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
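The completions in this stretch are printed by spdk_nvme_print_completion as a status name followed by an (SCT/SC) pair in hex: (03/02) is the Path Related Status type with Asymmetric Access Inaccessible, i.e. the path's ANA group went inaccessible while I/O was still in flight, and (00/08) is the Generic Command Status type with Command Aborted due to SQ Deletion, which appears once the submission queue is torn down. A minimal bash sketch of that mapping (the helper and its table are illustrative only, not part of SPDK or the test scripts):

#!/usr/bin/env bash
# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion appends to each
# status name in the log above, e.g. "(03/02)" or "(00/08)". Both values are hex.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "${sct}/${sc}" in
        00/00) echo "Generic Command Status: Successful Completion" ;;
        00/08) echo "Generic Command Status: Command Aborted due to SQ Deletion" ;;
        03/02) echo "Path Related Status: Asymmetric Access Inaccessible (ANA)" ;;
        03/03) echo "Path Related Status: Asymmetric Access Transition (ANA)" ;;
        *)     echo "SCT 0x${sct}, SC 0x${sc} (see the NVMe base spec status tables)" ;;
    esac
}

# The two statuses that dominate this part of the log:
decode_nvme_status 03 02   # path's ANA group became inaccessible during the multipath test
decode_nvme_status 00 08   # queued I/O aborted when the submission queue was deleted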
00:19:39.496 [2024-07-15 08:32:10.619140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.496 [2024-07-15 08:32:10.619187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.496 [2024-07-15 08:32:10.619456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.496 [2024-07-15 08:32:10.619469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.497 [2024-07-15 08:32:10.619800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.619829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.619857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.619886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.619914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619928] nvme_qpair. 
08:32:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.497 c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.619948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.619977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.619992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.497 [2024-07-15 08:32:10.620005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.620019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7156d0 is same with the state(5) to be set 00:19:39.497 [2024-07-15 08:32:10.620035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:39.497 [2024-07-15 08:32:10.620046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:39.497 [2024-07-15 08:32:10.620056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2168 len:8 PRP1 0x0 PRP2 0x0 00:19:39.497 [2024-07-15 08:32:10.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.620083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:39.497 [2024-07-15 08:32:10.620094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:39.497 [2024-07-15 08:32:10.620104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:8 PRP1 0x0 PRP2 0x0 00:19:39.497 [2024-07-15 08:32:10.620117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.620130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:39.497 [2024-07-15 08:32:10.620139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:39.497 [2024-07-15 08:32:10.620149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2728 len:8 PRP1 0x0 PRP2 0x0 00:19:39.497 [2024-07-15 08:32:10.620161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.497 [2024-07-15 08:32:10.620174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:39.497 [2024-07-15 08:32:10.620184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:39.497 [2024-07-15 08:32:10.620193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2736 len:8 PRP1 0x0 PRP2 0x0 00:19:39.497 [2024-07-15 08:32:10.620206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request / spdk_nvme_print_completion sequence repeats with status ABORTED - SQ DELETION for the remaining queued WRITE commands on qid:1 (lba:2744 through lba:2856, len:8 each), timestamps between 08:32:10.620 and 08:32:10.621 ...]
00:19:39.498 [2024-07-15 08:32:10.620975] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7156d0 was disconnected and freed. reset controller.
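The burst of ABORTED - SQ DELETION completions above is the bdev_nvme layer draining its queue: once the qpair is disconnected, every request still queued on it is completed manually with that status before a controller reset is attempted, so one disconnect produces a block of abort/complete messages per outstanding command. When reading an archived copy of this console log offline, a quick shell pass (the log path below is only a placeholder for whatever artifact the job actually saves) is usually enough to confirm the aborts are one contiguous drain rather than scattered I/O failures:

  # Hypothetical path to the archived console log; substitute the real artifact.
  log=./nvmf-tcp-uring-vg-autotest.console.log
  # First and last aborted LBA, numerically, to see the range that was drained.
  grep -o 'lba:[0-9]*' "$log" | sort -t: -k2,2n -u | sed -n '1p;$p'
  # Number of log lines carrying the SQ DELETION abort status.
  grep -c 'ABORTED - SQ DELETION' "$log"

A contiguous LBA range, with an abort count no larger than the queue depth in use (128 here), points at a clean teardown of the qpair, which is what this stage of the multipath test is meant to exercise.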
00:19:39.498 [2024-07-15 08:32:10.622153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.498 [2024-07-15 08:32:10.622239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.498 [2024-07-15 08:32:10.622262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.498 [2024-07-15 08:32:10.622298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f100 (9): Bad file descriptor 00:19:39.498 [2024-07-15 08:32:10.622711] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.498 [2024-07-15 08:32:10.622758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68f100 with addr=10.0.0.2, port=4421 00:19:39.498 [2024-07-15 08:32:10.622775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68f100 is same with the state(5) to be set 00:19:39.498 [2024-07-15 08:32:10.622878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f100 (9): Bad file descriptor 00:19:39.498 [2024-07-15 08:32:10.622914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.498 [2024-07-15 08:32:10.622929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:39.498 [2024-07-15 08:32:10.622943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:39.498 [2024-07-15 08:32:10.622975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:39.498 [2024-07-15 08:32:10.622991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.498 [2024-07-15 08:32:20.679476] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
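The reset sequence above is the expected failover path rather than a failure: the controller is disconnected, the first reconnect to 10.0.0.2:4421 is refused (errno 111, connection refused) because no listener is up on that port at that moment, the controller is marked failed, and the retry roughly ten seconds later succeeds. The multipath script drives this from the target side; a minimal sketch of the kind of listener toggle that produces exactly this pattern, assuming the same rpc.py calls used elsewhere in this trace and with a sleep chosen only for illustration, looks like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the second-path listener so the host's reconnect attempt sees ECONNREFUSED ...
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 10   # illustrative delay; the real script uses its own timing
  # ... then restore it so the next reset attempt reconnects successfully.
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The exact sequence lives in test/nvmf/host/multipath.sh; the point of the sketch is only that the errno 111 here reflects a deliberately absent listener, not a transport bug.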
00:19:39.498 Received shutdown signal, test time was about 55.608140 seconds 00:19:39.498 00:19:39.498 Latency(us) 00:19:39.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.498 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:39.498 Verification LBA range: start 0x0 length 0x4000 00:19:39.498 Nvme0n1 : 55.61 7164.62 27.99 0.00 0.00 17832.16 1139.43 7046430.72 00:19:39.498 =================================================================================================================== 00:19:39.498 Total : 7164.62 27.99 0.00 0.00 17832.16 1139.43 7046430.72 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.498 rmmod nvme_tcp 00:19:39.498 rmmod nvme_fabrics 00:19:39.498 rmmod nvme_keyring 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81098 ']' 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81098 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81098 ']' 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81098 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81098 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.498 killing process with pid 81098 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81098' 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81098 00:19:39.498 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81098 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath 
-- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:39.758 00:19:39.758 real 1m1.974s 00:19:39.758 user 2m51.151s 00:19:39.758 sys 0m19.264s 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.758 08:32:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:39.758 ************************************ 00:19:39.758 END TEST nvmf_host_multipath 00:19:39.758 ************************************ 00:19:39.758 08:32:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:39.758 08:32:31 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:39.758 08:32:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:39.758 08:32:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.758 08:32:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:39.758 ************************************ 00:19:39.758 START TEST nvmf_timeout 00:19:39.758 ************************************ 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:39.758 * Looking for test storage... 
00:19:39.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.758 
08:32:31 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.758 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.017 08:32:31 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:40.017 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.018 Cannot find device "nvmf_tgt_br" 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.018 Cannot find device "nvmf_tgt_br2" 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:40.018 08:32:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.018 Cannot find device "nvmf_tgt_br" 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.018 Cannot find device "nvmf_tgt_br2" 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.018 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.018 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:40.278 00:19:40.278 --- 10.0.0.2 ping statistics --- 00:19:40.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.278 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:40.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:40.278 00:19:40.278 --- 10.0.0.3 ping statistics --- 00:19:40.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.278 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:19:40.278 00:19:40.278 --- 10.0.0.1 ping statistics --- 00:19:40.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.278 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82260 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82260 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82260 ']' 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.278 08:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.278 [2024-07-15 08:32:32.395873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:40.278 [2024-07-15 08:32:32.395975] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.536 [2024-07-15 08:32:32.537994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:40.536 [2024-07-15 08:32:32.664164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.536 [2024-07-15 08:32:32.664227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.536 [2024-07-15 08:32:32.664241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.536 [2024-07-15 08:32:32.664251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.536 [2024-07-15 08:32:32.664260] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.536 [2024-07-15 08:32:32.664428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.536 [2024-07-15 08:32:32.664600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.795 [2024-07-15 08:32:32.719474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.358 08:32:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:41.615 [2024-07-15 08:32:33.677883] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.615 08:32:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:41.872 Malloc0 00:19:41.872 08:32:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.130 08:32:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.387 08:32:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.645 [2024-07-15 08:32:34.705035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.645 08:32:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82309 00:19:42.645 08:32:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:42.645 08:32:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82309 /var/tmp/bdevperf.sock 00:19:42.645 08:32:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82309 ']' 00:19:42.645 08:32:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.645 08:32:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.646 08:32:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.646 08:32:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.646 08:32:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.646 [2024-07-15 08:32:34.772649] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:42.646 [2024-07-15 08:32:34.772754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82309 ] 00:19:42.905 [2024-07-15 08:32:34.905941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.905 [2024-07-15 08:32:35.038117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.164 [2024-07-15 08:32:35.095674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:43.729 08:32:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.729 08:32:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:43.729 08:32:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:43.987 08:32:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:44.245 NVMe0n1 00:19:44.245 08:32:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82337 00:19:44.245 08:32:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.245 08:32:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:44.505 Running I/O for 10 seconds... 
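For orientation, the bring-up the trace above just walked through for the timeout test condenses to the commands below. Everything is copied from the trace (target RPCs on the default /var/tmp/spdk.sock, initiator RPCs on /var/tmp/bdevperf.sock); only the small wait loop stands in for the script's waitforlisten helper, and the sketch is a recap, not a substitute for test/nvmf/host/timeout.sh. The two knobs that matter for the behaviour exercised next are --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 on the attach call.

  spdk=/home/vagrant/spdk_repo/spdk
  # (nvmf_tgt itself was already started earlier in the trace via nvmfappstart inside the target netns.)
  # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem with that namespace and a 10.0.0.2:4420 listener.
  $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf starts idle (-z) on its own RPC socket; the run is triggered later via perform_tests.
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done   # stand-in for the script's waitforlisten
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # the script backgrounds this and records rpc_pid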
00:19:45.438 08:32:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:45.697 [2024-07-15 08:32:37.628043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.697 [2024-07-15 08:32:37.628108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats with status ABORTED - SQ DELETION for every queued command on qid:1 after the listener is removed -- WRITE lba:69576 through lba:69944 and READ lba:69120 through lba:69368, len:8 each -- timestamps between 08:32:37.628 and 08:32:37.630 ...]
00:19:45.698 [2024-07-15 08:32:37.629863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69952
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.698 [2024-07-15 08:32:37.629872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.698 [2024-07-15 08:32:37.629883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.698 [2024-07-15 08:32:37.629892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.698 [2024-07-15 08:32:37.629903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.698 [2024-07-15 08:32:37.629912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.698 [2024-07-15 08:32:37.629923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.698 [2024-07-15 08:32:37.629932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.629943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.629952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.629963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.629972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.629983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.629992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:45.699 [2024-07-15 08:32:37.630072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630495] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.699 [2024-07-15 08:32:37.630524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.699 [2024-07-15 08:32:37.630836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e64d0 is same with the state(5) to be set 00:19:45.699 [2024-07-15 08:32:37.630868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.699 [2024-07-15 08:32:37.630877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.699 [2024-07-15 08:32:37.630886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69560 len:8 PRP1 0x0 PRP2 0x0 00:19:45.699 [2024-07-15 08:32:37.630895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.699 [2024-07-15 08:32:37.630956] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7e64d0 was disconnected and freed. reset controller. 
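The long run of ABORTED - SQ DELETION notices above is bdev_nvme draining every READ/WRITE that was still queued on qpair 0x7e64d0 when the qpair was torn down for the controller reset; each print_command line is paired with a completion carrying that generic abort status. When triaging a capture like this it is usually enough to count the aborts rather than read them. A hedged shell sketch, not part of the test itself; "bdevperf.log" is a placeholder for wherever this console output was saved:

# Count aborted completions; use grep -o because captured lines are wrapped
# and can hold several log records each.
grep -o 'ABORTED - SQ DELETION' bdevperf.log | wc -l
# List the qpairs that hit connection/teardown errors.
grep -o 'tqpair=0x[0-9a-f]*' bdevperf.log | sort | uniq -c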
00:19:45.699 [2024-07-15 08:32:37.631231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.699 [2024-07-15 08:32:37.631342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79bd40 (9): Bad file descriptor 00:19:45.699 [2024-07-15 08:32:37.631447] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.699 [2024-07-15 08:32:37.631468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bd40 with addr=10.0.0.2, port=4420 00:19:45.699 [2024-07-15 08:32:37.631479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79bd40 is same with the state(5) to be set 00:19:45.699 [2024-07-15 08:32:37.631496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79bd40 (9): Bad file descriptor 00:19:45.699 [2024-07-15 08:32:37.631512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:45.699 [2024-07-15 08:32:37.631531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:45.699 [2024-07-15 08:32:37.631542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:45.699 [2024-07-15 08:32:37.631562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:45.699 [2024-07-15 08:32:37.631573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.699 08:32:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:47.684 [2024-07-15 08:32:39.631869] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:47.684 [2024-07-15 08:32:39.631955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bd40 with addr=10.0.0.2, port=4420 00:19:47.684 [2024-07-15 08:32:39.631973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79bd40 is same with the state(5) to be set 00:19:47.684 [2024-07-15 08:32:39.632001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79bd40 (9): Bad file descriptor 00:19:47.684 [2024-07-15 08:32:39.632035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:47.684 [2024-07-15 08:32:39.632047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:47.684 [2024-07-15 08:32:39.632059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:47.684 [2024-07-15 08:32:39.632088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
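errno = 111 in the uring_sock_create failure above is ECONNREFUSED: the target is refusing connections on 10.0.0.2:4420, so each reconnect attempt fails immediately, the controller stays in the failed state, and bdev_nvme schedules another attempt (the further attempts at 08:32:41 and 08:32:43 that follow). As an illustration only, not part of host/timeout.sh, the same condition can be checked from a shell with bash's /dev/tcp redirection:

# Prints "refused" while the listener is gone and "accepting" once it is back.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 accepting connections"
else
    echo "10.0.0.2:4420 refused (the errno 111 bdev_nvme keeps hitting)"
fi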
00:19:47.684 [2024-07-15 08:32:39.632099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.684 08:32:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:47.684 08:32:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:47.684 08:32:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:47.941 08:32:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:47.941 08:32:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:47.941 08:32:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:47.941 08:32:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:48.507 08:32:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:48.507 08:32:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:49.883 [2024-07-15 08:32:41.632298] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:49.883 [2024-07-15 08:32:41.632366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79bd40 with addr=10.0.0.2, port=4420 00:19:49.883 [2024-07-15 08:32:41.632383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79bd40 is same with the state(5) to be set 00:19:49.883 [2024-07-15 08:32:41.632409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79bd40 (9): Bad file descriptor 00:19:49.883 [2024-07-15 08:32:41.632429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:49.883 [2024-07-15 08:32:41.632439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:49.883 [2024-07-15 08:32:41.632450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:49.883 [2024-07-15 08:32:41.632477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:49.883 [2024-07-15 08:32:41.632489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.778 [2024-07-15 08:32:43.632644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.778 [2024-07-15 08:32:43.632736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.778 [2024-07-15 08:32:43.632750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:51.778 [2024-07-15 08:32:43.632761] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:51.778 [2024-07-15 08:32:43.632789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
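The get_controller/get_bdev xtrace above shows how the test decides whether the controller and its namespace are still visible while reconnects keep failing: it asks the bdevperf RPC server for the registered names and compares them with NVMe0 and NVMe0n1. A standalone sketch of those helpers, reconstructed from the trace rather than copied from host/timeout.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Same RPCs and jq filters as the @41/@37 lines in the trace.
get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

[[ "$(get_controller)" == "NVMe0" ]]  && echo "controller still registered"
[[ "$(get_bdev)" == "NVMe0n1" ]]      && echo "namespace bdev still present"
# Once bdev_nvme finally gives up on the controller, both calls return nothing,
# which is what the later [[ '' == '' ]] checks in this log rely on.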
00:19:52.770
00:19:52.770 Latency(us)
00:19:52.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:52.770 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:52.770 Verification LBA range: start 0x0 length 0x4000
00:19:52.770 NVMe0n1 : 8.10 1066.87 4.17 15.81 0.00 118059.40 3872.58 7015926.69
00:19:52.770 ===================================================================================================================
00:19:52.770 Total : 1066.87 4.17 15.81 0.00 118059.40 3872.58 7015926.69
00:19:52.770 0
00:19:53.336 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:53.336 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:53.336 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:53.902 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:53.902 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:53.902 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:53.902 08:32:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82337
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82309
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82309 ']'
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82309
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82309
00:19:53.902 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
killing process with pid 82309
08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82309'
Received shutdown signal, test time was about 9.514199 seconds
00:19:53.903
00:19:53.903 Latency(us)
00:19:53.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.903 ===================================================================================================================
00:19:53.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:53.903 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82309
00:19:53.903 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82309
00:19:54.161 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-15 08:32:46.481544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82449
00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- #
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82449 /var/tmp/bdevperf.sock 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82449 ']' 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.419 08:32:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.419 [2024-07-15 08:32:46.543561] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:54.419 [2024-07-15 08:32:46.543660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82449 ] 00:19:54.677 [2024-07-15 08:32:46.679067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.677 [2024-07-15 08:32:46.798434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.677 [2024-07-15 08:32:46.851350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:55.687 08:32:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.687 08:32:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:55.687 08:32:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:55.945 08:32:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:56.202 NVMe0n1 00:19:56.202 08:32:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82478 00:19:56.202 08:32:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.202 08:32:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:56.202 Running I/O for 10 seconds... 
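This second bdevperf run wires up the reconnect behaviour the rest of the log exercises: after bdev_nvme_set_options -r -1, the controller is attached with a 1 s reconnect delay, a 2 s fast-io-fail window and a 5 s controller-loss timeout, the verify workload starts, and the listener is then pulled out from under it (the nvmf_subsystem_remove_listener call that follows). Condensed from the xtrace above into one runnable sequence; paths and arguments are the ones shown in the log, and the real script additionally waits for the RPC socket (waitforlisten) before issuing RPCs:

# Start bdevperf on its own RPC socket; -z makes it wait for the
# perform_tests RPC that is driven below.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the verify job, then remove the target listener to force the
# abort/reconnect sequence recorded below.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420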
00:19:57.136 08:32:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.397 [2024-07-15 08:32:49.461748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.461986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.461995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 
08:32:49.462015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.398 [2024-07-15 08:32:49.462642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.398 [2024-07-15 08:32:49.462682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.398 [2024-07-15 08:32:49.462693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.462701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.462734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.462756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.462776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.462796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.462816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 
08:32:49.462868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.462990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.462999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.399 [2024-07-15 08:32:49.463219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.399 [2024-07-15 08:32:49.463567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.399 [2024-07-15 08:32:49.463578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 
[2024-07-15 08:32:49.463707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.400 [2024-07-15 08:32:49.463866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.463886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.463906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.463927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.463947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.463967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.463987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.463999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.400 [2024-07-15 08:32:49.464008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60f4d0 is same with the state(5) to be set 00:19:57.400 [2024-07-15 08:32:49.464030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64312 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64920 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:19:57.400 [2024-07-15 08:32:49.464147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64928 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64944 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64952 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64960 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64968 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464346] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64976 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64984 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64992 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.400 [2024-07-15 08:32:49.464445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.400 [2024-07-15 08:32:49.464453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.400 [2024-07-15 08:32:49.464461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65000 len:8 PRP1 0x0 PRP2 0x0 00:19:57.400 [2024-07-15 08:32:49.464470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65008 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65016 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64320 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64328 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64336 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64344 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64352 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64360 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 
08:32:49.464777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64368 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.401 [2024-07-15 08:32:49.464803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.401 [2024-07-15 08:32:49.464811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64376 len:8 PRP1 0x0 PRP2 0x0 00:19:57.401 [2024-07-15 08:32:49.464820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.401 [2024-07-15 08:32:49.464880] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x60f4d0 was disconnected and freed. reset controller. 00:19:57.401 [2024-07-15 08:32:49.465144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.401 [2024-07-15 08:32:49.465262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:19:57.401 [2024-07-15 08:32:49.465436] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.401 [2024-07-15 08:32:49.465470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c4d40 with addr=10.0.0.2, port=4420 00:19:57.401 [2024-07-15 08:32:49.465488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4d40 is same with the state(5) to be set 00:19:57.401 [2024-07-15 08:32:49.465518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:19:57.401 [2024-07-15 08:32:49.465547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.401 [2024-07-15 08:32:49.465564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.401 [2024-07-15 08:32:49.465580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.401 [2024-07-15 08:32:49.465608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:57.401 [2024-07-15 08:32:49.465629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:57.401 08:32:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:58.335 [2024-07-15 08:32:50.465803] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.335 [2024-07-15 08:32:50.465883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c4d40 with addr=10.0.0.2, port=4420
00:19:58.335 [2024-07-15 08:32:50.465900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4d40 is same with the state(5) to be set
00:19:58.335 [2024-07-15 08:32:50.465930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor
00:19:58.335 [2024-07-15 08:32:50.465949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:58.335 [2024-07-15 08:32:50.465959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:58.335 [2024-07-15 08:32:50.465971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:58.335 [2024-07-15 08:32:50.466000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.335 [2024-07-15 08:32:50.466012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:58.335 08:32:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:58.592 [2024-07-15 08:32:50.739511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:58.592 08:32:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82478
00:19:59.525 [2024-07-15 08:32:51.483311] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:07.637
00:20:07.637                                                                    Latency(us)
00:20:07.637 Device Information                                            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:07.637 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:07.637      Verification LBA range: start 0x0 length 0x4000
00:20:07.637      NVMe0n1                                                  :      10.01    6258.31      24.45       0.00       0.00   20414.23    3202.33 3019898.88
00:20:07.637 ===================================================================================================================
00:20:07.637      Total                                                    :               6258.31      24.45       0.00       0.00   20414.23    3202.33 3019898.88
00:20:07.637 0
00:20:07.637 08:32:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82583
00:20:07.637 08:32:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:07.637 08:32:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:07.637 Running I/O for 10 seconds...
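The rpc.py invocations traced above and in the next block (host/timeout.sh@91 and @99) toggle the NVMe/TCP listener while bdevperf keeps I/O in flight; that is what produces the "connect() failed, errno = 111" reconnect errors and the ABORTED - SQ DELETION notices that fill this log. Below is a minimal bash sketch of that toggle, using only the rpc.py subcommands, NQN, address and port that appear in the log lines; the variable names, the exact ordering and the one-second pause are illustrative assumptions, not a copy of host/timeout.sh.

    #!/usr/bin/env bash
    # Illustrative sketch only: re-create the listener remove/re-add cycle that the
    # timeout test drives through rpc.py (paths and arguments taken from the log).
    set -euo pipefail

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location, as logged
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener: the host's reconnect attempts then fail with
    # errno 111 and queued I/O is aborted when the submission queue is deleted.
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    # Give the host time to notice the dead connection and fail a reset attempt.
    sleep 1

    # Restore the listener; the next reconnect succeeds and the log reports
    # "Resetting controller successful."
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the summary table above, the throughput column is consistent with the IOPS column for 4096-byte I/O: 6258.31 IOPS x 4096 B is about 25.6 MB/s, i.e. roughly 24.45 MiB/s.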
00:20:07.637 08:32:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.637 [2024-07-15 08:32:59.593360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.637 [2024-07-15 08:32:59.593434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.637 [2024-07-15 08:32:59.593608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.637 [2024-07-15 08:32:59.593617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 
08:32:59.593637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.593990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.593999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.638 [2024-07-15 08:32:59.594439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.638 [2024-07-15 08:32:59.594450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:59 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60240 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.594983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.594992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 
08:32:59.595109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.639 [2024-07-15 08:32:59.595153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.639 [2024-07-15 08:32:59.595164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.640 [2024-07-15 08:32:59.595758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.640 [2024-07-15 08:32:59.595943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.640 [2024-07-15 08:32:59.595953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.641 [2024-07-15 08:32:59.595962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 
08:32:59.595973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.641 [2024-07-15 08:32:59.595982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.595993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.641 [2024-07-15 08:32:59.596001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.596012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.641 [2024-07-15 08:32:59.596021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.596032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.641 [2024-07-15 08:32:59.596040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.596051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.641 [2024-07-15 08:32:59.596060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.596071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:07.641 [2024-07-15 08:32:59.596080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.596091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60dca0 is same with the state(5) to be set 00:20:07.641 [2024-07-15 08:32:59.596106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.641 [2024-07-15 08:32:59.596114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.641 [2024-07-15 08:32:59.596123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60576 len:8 PRP1 0x0 PRP2 0x0 00:20:07.641 [2024-07-15 08:32:59.596136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.641 [2024-07-15 08:32:59.596192] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x60dca0 was disconnected and freed. reset controller. 
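The burst of "ABORTED - SQ DELETION (00/08)" completions above is the expected side effect of the qpair teardown: every WRITE/READ still queued on the I/O submission queue is completed manually (nvme_qpair_manual_complete_request) with generic status 00/08 before the qpair 0x60dca0 is freed and the controller reset starts. With output this repetitive it is easier to summarize than to read line by line; the one-liners below are only an illustrative helper and assume the bdevperf output above has been captured to a file (timeout.log is a hypothetical name, not produced by the test itself):

  grep -c 'ABORTED - SQ DELETION' timeout.log                 # how many queued commands were failed
  grep -o 'lba:[0-9]* len:8' timeout.log | sort -u | wc -l    # how many distinct LBAs were in flight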
00:20:07.641 [2024-07-15 08:32:59.596419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.641 [2024-07-15 08:32:59.596499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:20:07.641 [2024-07-15 08:32:59.596599] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.641 [2024-07-15 08:32:59.596620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c4d40 with addr=10.0.0.2, port=4420 00:20:07.641 [2024-07-15 08:32:59.596631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4d40 is same with the state(5) to be set 00:20:07.641 [2024-07-15 08:32:59.596649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:20:07.641 [2024-07-15 08:32:59.596665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.641 [2024-07-15 08:32:59.596674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.641 [2024-07-15 08:32:59.596685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.641 [2024-07-15 08:32:59.596713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.641 [2024-07-15 08:32:59.596740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.641 08:32:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:08.576 [2024-07-15 08:33:00.596893] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.577 [2024-07-15 08:33:00.596970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c4d40 with addr=10.0.0.2, port=4420 00:20:08.577 [2024-07-15 08:33:00.596987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4d40 is same with the state(5) to be set 00:20:08.577 [2024-07-15 08:33:00.597014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:20:08.577 [2024-07-15 08:33:00.597033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:08.577 [2024-07-15 08:33:00.597044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:08.577 [2024-07-15 08:33:00.597054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:08.577 [2024-07-15 08:33:00.597082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
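Each reconnect attempt in this stretch fails inside uring_sock_create with errno = 111, which on Linux is ECONNREFUSED: the target's listener on 10.0.0.2:4420 appears to have been taken down (it is re-added a few seconds later in the trace), so every connect() is refused, the controller stays in the failed state, and bdev_nvme schedules the next retry. A quick, illustrative way to confirm the symbolic name of that errno on any Linux host with python3 (not part of the test itself):

  python3 -c "import errno, os; print(errno.errorcode[111], '-', os.strerror(111))"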
00:20:08.577 [2024-07-15 08:33:00.597094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.514 [2024-07-15 08:33:01.597220] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.514 [2024-07-15 08:33:01.597276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c4d40 with addr=10.0.0.2, port=4420 00:20:09.514 [2024-07-15 08:33:01.597292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4d40 is same with the state(5) to be set 00:20:09.514 [2024-07-15 08:33:01.597316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:20:09.514 [2024-07-15 08:33:01.597335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.514 [2024-07-15 08:33:01.597346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.514 [2024-07-15 08:33:01.597356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.514 [2024-07-15 08:33:01.597383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.514 [2024-07-15 08:33:01.597394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.461 [2024-07-15 08:33:02.601167] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.461 [2024-07-15 08:33:02.601251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c4d40 with addr=10.0.0.2, port=4420 00:20:10.461 [2024-07-15 08:33:02.601267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c4d40 is same with the state(5) to be set 00:20:10.461 [2024-07-15 08:33:02.601515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4d40 (9): Bad file descriptor 00:20:10.461 [2024-07-15 08:33:02.601767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.461 [2024-07-15 08:33:02.601781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:10.461 [2024-07-15 08:33:02.601793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:10.461 [2024-07-15 08:33:02.605462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.461 [2024-07-15 08:33:02.605499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.461 08:33:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.024 [2024-07-15 08:33:02.918552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.024 08:33:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82583 00:20:11.643 [2024-07-15 08:33:03.643824] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
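The recovery step is visible in the trace: host/timeout.sh re-adds the TCP listener on the target, after which the pending reconnect finally succeeds and bdev_nvme logs "Resetting controller successful." Isolated from the surrounding noise (path, NQN and address exactly as in this run), the step is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the target side then reports: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***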
00:20:16.909 00:20:16.909 Latency(us) 00:20:16.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.909 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:16.909 Verification LBA range: start 0x0 length 0x4000 00:20:16.909 NVMe0n1 : 10.01 5293.24 20.68 3888.52 0.00 13913.75 636.74 3019898.88 00:20:16.909 =================================================================================================================== 00:20:16.909 Total : 5293.24 20.68 3888.52 0.00 13913.75 0.00 3019898.88 00:20:16.909 0 00:20:16.909 08:33:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82449 00:20:16.909 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82449 ']' 00:20:16.909 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82449 00:20:16.909 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:16.909 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.909 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82449 00:20:16.909 killing process with pid 82449 00:20:16.909 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.909 00:20:16.909 Latency(us) 00:20:16.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.909 =================================================================================================================== 00:20:16.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82449' 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82449 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82449 00:20:16.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82692 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82692 /var/tmp/bdevperf.sock 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82692 ']' 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.910 08:33:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:16.910 [2024-07-15 08:33:08.834397] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:16.910 [2024-07-15 08:33:08.835473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82692 ] 00:20:16.910 [2024-07-15 08:33:08.970011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.169 [2024-07-15 08:33:09.088311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.169 [2024-07-15 08:33:09.141870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:17.735 08:33:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.735 08:33:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:17.735 08:33:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82708 00:20:17.735 08:33:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82692 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:17.735 08:33:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:17.993 08:33:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:18.251 NVMe0n1 00:20:18.509 08:33:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82754 00:20:18.509 08:33:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.509 08:33:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:18.509 Running I/O for 10 seconds... 
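For the next timeout case the harness starts a fresh bdevperf instance and wires it to the target over its own RPC socket before kicking off the 10-second run. The sequence below is consolidated from the trace above purely for readability: binaries, socket path, flags and the controller-loss/reconnect timeouts (5 s / 2 s) are copied verbatim from this run, while the waitforlisten/bpftrace plumbing of the real script is elided and the trailing '&' is only a sketch of how the harness keeps bdevperf running while it issues RPCs.

  # start bdevperf idle on its own RPC socket (the harness waits for it to listen)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

  # bdev_nvme options as passed by host/timeout.sh (values copied verbatim)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_options -r -1 -e 9

  # attach the controller with a 5 s ctrlr-loss timeout and a 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the workload ("Running I/O for 10 seconds...")
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests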
00:20:19.441 08:33:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.702 [2024-07-15 08:33:11.721003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721474] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.702 [2024-07-15 08:33:11.721540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 
00:20:19.703 [2024-07-15 08:33:11.721665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is 
same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.721997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94b80 is same with the state(5) to be set 00:20:19.703 [2024-07-15 08:33:11.722268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.703 [2024-07-15 08:33:11.722475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.703 [2024-07-15 08:33:11.722486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 
[2024-07-15 08:33:11.722578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722805] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.722985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.722995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.704 [2024-07-15 08:33:11.723406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.704 [2024-07-15 08:33:11.723415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 
[2024-07-15 08:33:11.723684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.723986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.723995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.705 [2024-07-15 08:33:11.724304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.705 [2024-07-15 08:33:11.724315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.706 [2024-07-15 08:33:11.724762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.724988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.724999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.725008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.706 [2024-07-15 08:33:11.725019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.706 [2024-07-15 08:33:11.725033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.707 [2024-07-15 08:33:11.725043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834310 is same with the state(5) to be set 00:20:19.707 [2024-07-15 08:33:11.725056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:19.707 [2024-07-15 08:33:11.725064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.707 [2024-07-15 08:33:11.725076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72176 len:8 PRP1 0x0 PRP2 0x0 00:20:19.707 [2024-07-15 08:33:11.725085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.707 [2024-07-15 08:33:11.725158] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1834310 was disconnected and freed. reset controller. 00:20:19.707 [2024-07-15 08:33:11.725429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.707 [2024-07-15 08:33:11.725530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5c00 (9): Bad file descriptor 00:20:19.707 [2024-07-15 08:33:11.725665] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.707 [2024-07-15 08:33:11.725691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5c00 with addr=10.0.0.2, port=4420 00:20:19.707 [2024-07-15 08:33:11.725703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5c00 is same with the state(5) to be set 00:20:19.707 [2024-07-15 08:33:11.725742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5c00 (9): Bad file descriptor 00:20:19.707 [2024-07-15 08:33:11.725762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.707 [2024-07-15 08:33:11.725772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.707 [2024-07-15 08:33:11.725782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
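The dump above is the host draining I/O queue pair 1 after the TCP connection to 10.0.0.2:4420 dropped: every queued READ is completed with ABORTED - SQ DELETION (status 00/08), the qpair is freed, and bdev_nvme begins resetting the controller; the following connect() fails with errno 111 (ECONNREFUSED) because nothing is listening while the target side is down. A minimal sketch for summarizing such a dump offline (the log path is an assumption, not a file this test produces):

  # Hypothetical post-mortem helper; LOG is an assumed path to a saved copy of this console output.
  LOG=./nvmf_timeout_console.log
  grep -o 'ABORTED - SQ DELETION' "$LOG" | wc -l               # total aborted completions
  grep -o 'READ sqid:1 cid:[0-9]*' "$LOG" | sort -u | wc -l    # distinct commands drained from qid 1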
00:20:19.707 [2024-07-15 08:33:11.725803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.707 [2024-07-15 08:33:11.725814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.707 08:33:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82754 00:20:21.608 [2024-07-15 08:33:13.726050] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.608 [2024-07-15 08:33:13.726159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5c00 with addr=10.0.0.2, port=4420 00:20:21.608 [2024-07-15 08:33:13.726176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5c00 is same with the state(5) to be set 00:20:21.608 [2024-07-15 08:33:13.726204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5c00 (9): Bad file descriptor 00:20:21.608 [2024-07-15 08:33:13.726223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.608 [2024-07-15 08:33:13.726233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.608 [2024-07-15 08:33:13.726245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.608 [2024-07-15 08:33:13.726274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.608 [2024-07-15 08:33:13.726286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:24.158 [2024-07-15 08:33:15.726528] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.158 [2024-07-15 08:33:15.726605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5c00 with addr=10.0.0.2, port=4420 00:20:24.158 [2024-07-15 08:33:15.726625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5c00 is same with the state(5) to be set 00:20:24.158 [2024-07-15 08:33:15.726668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5c00 (9): Bad file descriptor 00:20:24.158 [2024-07-15 08:33:15.726688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:24.158 [2024-07-15 08:33:15.726698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:24.158 [2024-07-15 08:33:15.726709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:24.158 [2024-07-15 08:33:15.726748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:24.158 [2024-07-15 08:33:15.726762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:26.062 [2024-07-15 08:33:17.726845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
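After each failed reset the host backs off and reconnects roughly every two seconds (08:33:11, 08:33:13, 08:33:15, 08:33:17 above) while the script blocks on the background workload with wait; the verification step later in this output counts how many 'reconnect delay bdev controller NVMe0' probes were recorded in trace.txt. A minimal re-creation of that check, using the same file and match string as the test (the pass threshold is an assumption read off the (( 3 <= 2 )) comparison shown below):

  # Sketch of the reconnect-delay verification; threshold semantics are assumed, not authoritative.
  TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$TRACE")
  if (( delays <= 2 )); then
      echo "only $delays reconnect delay events recorded" >&2
      exit 1
  fi
  echo "observed $delays reconnect delay events"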
00:20:26.062 [2024-07-15 08:33:17.726915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:26.062 [2024-07-15 08:33:17.726928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:26.062 [2024-07-15 08:33:17.726938] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:26.062 [2024-07-15 08:33:17.726965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:26.629 00:20:26.629 Latency(us) 00:20:26.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.629 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:26.629 NVMe0n1 : 8.19 2064.10 8.06 15.64 0.00 61437.75 1645.85 7015926.69 00:20:26.629 =================================================================================================================== 00:20:26.629 Total : 2064.10 8.06 15.64 0.00 61437.75 1645.85 7015926.69 00:20:26.629 0 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.629 Attaching 5 probes... 00:20:26.629 1228.935331: reset bdev controller NVMe0 00:20:26.629 1229.086221: reconnect bdev controller NVMe0 00:20:26.629 3229.402444: reconnect delay bdev controller NVMe0 00:20:26.629 3229.431386: reconnect bdev controller NVMe0 00:20:26.629 5229.856406: reconnect delay bdev controller NVMe0 00:20:26.629 5229.898987: reconnect bdev controller NVMe0 00:20:26.629 7230.327530: reconnect delay bdev controller NVMe0 00:20:26.629 7230.351710: reconnect bdev controller NVMe0 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82708 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82692 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82692 ']' 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82692 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82692 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:26.629 killing process with pid 82692 00:20:26.629 Received shutdown signal, test time was about 8.242082 seconds 00:20:26.629 00:20:26.629 Latency(us) 00:20:26.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.629 =================================================================================================================== 00:20:26.629 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82692' 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 82692 00:20:26.629 08:33:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82692 00:20:26.888 08:33:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:27.145 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.146 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.146 rmmod nvme_tcp 00:20:27.146 rmmod nvme_fabrics 00:20:27.146 rmmod nvme_keyring 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82260 ']' 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82260 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82260 ']' 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82260 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82260 00:20:27.404 killing process with pid 82260 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82260' 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82260 00:20:27.404 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82260 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:27.662 00:20:27.662 real 0m47.866s 00:20:27.662 user 2m21.104s 
00:20:27.662 sys 0m5.747s 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.662 08:33:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.662 ************************************ 00:20:27.662 END TEST nvmf_timeout 00:20:27.662 ************************************ 00:20:27.662 08:33:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:27.662 08:33:19 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:27.662 08:33:19 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:27.662 08:33:19 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.662 08:33:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.662 08:33:19 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:27.662 ************************************ 00:20:27.662 END TEST nvmf_tcp 00:20:27.662 ************************************ 00:20:27.662 00:20:27.662 real 12m22.038s 00:20:27.662 user 30m11.640s 00:20:27.662 sys 3m2.719s 00:20:27.662 08:33:19 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.662 08:33:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.662 08:33:19 -- common/autotest_common.sh@1142 -- # return 0 00:20:27.662 08:33:19 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:27.662 08:33:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:27.662 08:33:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:27.662 08:33:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.663 08:33:19 -- common/autotest_common.sh@10 -- # set +x 00:20:27.663 ************************************ 00:20:27.663 START TEST nvmf_dif 00:20:27.663 ************************************ 00:20:27.663 08:33:19 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:27.921 * Looking for test storage... 
00:20:27.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:27.921 08:33:19 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.921 08:33:19 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.922 08:33:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.922 08:33:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.922 08:33:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.922 08:33:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.922 08:33:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.922 08:33:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.922 08:33:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:27.922 08:33:19 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.922 08:33:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:27.922 08:33:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:27.922 08:33:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:27.922 08:33:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:27.922 08:33:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.922 08:33:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:27.922 08:33:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.922 08:33:19 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:27.922 Cannot find device "nvmf_tgt_br" 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.922 Cannot find device "nvmf_tgt_br2" 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:27.922 08:33:19 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:27.922 Cannot find device "nvmf_tgt_br" 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:27.922 Cannot find device "nvmf_tgt_br2" 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.922 08:33:20 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:28.181 
08:33:20 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:28.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:20:28.181 00:20:28.181 --- 10.0.0.2 ping statistics --- 00:20:28.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.181 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:28.181 08:33:20 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:28.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:28.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:28.182 00:20:28.182 --- 10.0.0.3 ping statistics --- 00:20:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.182 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:28.182 08:33:20 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:28.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:28.182 00:20:28.182 --- 10.0.0.1 ping statistics --- 00:20:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.182 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:28.182 08:33:20 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.182 08:33:20 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:28.182 08:33:20 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:28.182 08:33:20 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:28.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.440 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:28.440 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.699 08:33:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:28.699 08:33:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83190 00:20:28.699 
08:33:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83190 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83190 ']' 00:20:28.699 08:33:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.699 08:33:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:28.699 [2024-07-15 08:33:20.731637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:28.699 [2024-07-15 08:33:20.731756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.699 [2024-07-15 08:33:20.874173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.958 [2024-07-15 08:33:20.998016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.958 [2024-07-15 08:33:20.998083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.958 [2024-07-15 08:33:20.998098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.958 [2024-07-15 08:33:20.998108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.958 [2024-07-15 08:33:20.998118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
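A condensed recap of the target-side plumbing traced above may help when reading the rest of this run: nvmf_veth_init builds an isolated veth/bridge topology with the target interfaces moved into the nvmf_tgt_ns_spdk namespace, checks it with three pings, and nvmfappstart then launches nvmf_tgt inside that namespace and waits on its RPC socket. The sketch below only collects commands already visible in this trace (same names, addresses and flags); ordering is simplified and cleanup/error handling is omitted.

  # namespace plus three veth pairs: one initiator-side, two target-side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1, target 10.0.0.2/10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # let NVMe/TCP (port 4420) in and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # start the target inside the namespace; the test then polls /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

The three pings logged above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm the bridge is forwarding traffic before the target comes up.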
00:20:28.958 [2024-07-15 08:33:20.998151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.958 [2024-07-15 08:33:21.056237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:29.896 08:33:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 08:33:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.896 08:33:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:29.896 08:33:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 [2024-07-15 08:33:21.780649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.896 08:33:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.896 08:33:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 ************************************ 00:20:29.896 START TEST fio_dif_1_default 00:20:29.896 ************************************ 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 bdev_null0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.896 08:33:21 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 [2024-07-15 08:33:21.824763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.896 { 00:20:29.896 "params": { 00:20:29.896 "name": "Nvme$subsystem", 00:20:29.896 "trtype": "$TEST_TRANSPORT", 00:20:29.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.896 "adrfam": "ipv4", 00:20:29.896 "trsvcid": "$NVMF_PORT", 00:20:29.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.896 "hdgst": ${hdgst:-false}, 00:20:29.896 "ddgst": ${ddgst:-false} 00:20:29.896 }, 00:20:29.896 "method": "bdev_nvme_attach_controller" 00:20:29.896 } 00:20:29.896 EOF 00:20:29.896 )") 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:29.896 "params": { 00:20:29.896 "name": "Nvme0", 00:20:29.896 "trtype": "tcp", 00:20:29.896 "traddr": "10.0.0.2", 00:20:29.896 "adrfam": "ipv4", 00:20:29.896 "trsvcid": "4420", 00:20:29.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.896 "hdgst": false, 00:20:29.896 "ddgst": false 00:20:29.896 }, 00:20:29.896 "method": "bdev_nvme_attach_controller" 00:20:29.896 }' 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:29.896 08:33:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.155 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:30.155 fio-3.35 00:20:30.155 Starting 1 thread 00:20:42.358 00:20:42.358 filename0: (groupid=0, jobs=1): err= 0: pid=83257: Mon Jul 15 08:33:32 2024 00:20:42.358 read: IOPS=8513, BW=33.3MiB/s (34.9MB/s)(333MiB/10001msec) 00:20:42.358 slat (nsec): min=6401, max=51601, avg=8592.64, stdev=2913.60 00:20:42.358 clat (usec): min=220, max=3206, avg=444.77, stdev=51.29 00:20:42.358 lat (usec): min=227, max=3239, avg=453.36, stdev=51.66 00:20:42.358 clat percentiles (usec): 00:20:42.358 | 1.00th=[ 371], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 416], 00:20:42.358 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:20:42.358 | 70.00th=[ 453], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 529], 00:20:42.358 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 709], 99.95th=[ 938], 00:20:42.358 | 99.99th=[ 1795] 00:20:42.358 bw ( KiB/s): min=32896, max=35264, per=100.00%, avg=34231.58, stdev=632.92, samples=19 00:20:42.358 iops : min= 8224, max= 8816, avg=8557.89, stdev=158.23, samples=19 00:20:42.358 lat (usec) : 250=0.01%, 500=92.18%, 750=7.73%, 
1000=0.05% 00:20:42.358 lat (msec) : 2=0.03%, 4=0.01% 00:20:42.358 cpu : usr=83.26%, sys=14.94%, ctx=15, majf=0, minf=0 00:20:42.358 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.359 issued rwts: total=85146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.359 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:42.359 00:20:42.359 Run status group 0 (all jobs): 00:20:42.359 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=333MiB (349MB), run=10001-10001msec 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 00:20:42.359 real 0m11.044s 00:20:42.359 user 0m9.001s 00:20:42.359 sys 0m1.775s 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 ************************************ 00:20:42.359 END TEST fio_dif_1_default 00:20:42.359 ************************************ 00:20:42.359 08:33:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:42.359 08:33:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:42.359 08:33:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:42.359 08:33:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 ************************************ 00:20:42.359 START TEST fio_dif_1_multi_subsystems 00:20:42.359 ************************************ 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.359 08:33:32 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 bdev_null0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 [2024-07-15 08:33:32.926200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 bdev_null1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:42.359 { 00:20:42.359 "params": { 00:20:42.359 "name": "Nvme$subsystem", 00:20:42.359 "trtype": "$TEST_TRANSPORT", 00:20:42.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.359 "adrfam": "ipv4", 00:20:42.359 "trsvcid": "$NVMF_PORT", 00:20:42.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.359 "hdgst": ${hdgst:-false}, 00:20:42.359 "ddgst": ${ddgst:-false} 00:20:42.359 }, 00:20:42.359 "method": "bdev_nvme_attach_controller" 00:20:42.359 } 00:20:42.359 EOF 00:20:42.359 )") 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:42.359 { 00:20:42.359 "params": { 00:20:42.359 "name": "Nvme$subsystem", 00:20:42.359 "trtype": "$TEST_TRANSPORT", 00:20:42.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.359 "adrfam": "ipv4", 00:20:42.359 "trsvcid": "$NVMF_PORT", 00:20:42.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.359 "hdgst": ${hdgst:-false}, 00:20:42.359 "ddgst": ${ddgst:-false} 00:20:42.359 }, 00:20:42.359 "method": "bdev_nvme_attach_controller" 00:20:42.359 } 00:20:42.359 EOF 00:20:42.359 )") 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.359 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
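Worth noting for the fio launches in this log: fio_bdev hands fio two file descriptors rather than files on disk. fd 62 carries the bdev config assembled here (printed just below), and fd 61 carries the fio job file produced by gen_fio_conf, which is not echoed in the trace. A rough stand-alone equivalent is sketched here; the job section is reconstructed from the "filename0:"/"filename1:" banner lines, and the Nvme0n1/Nvme1n1 bdev names are assumed from SPDK's usual <controller>n<nsid> naming, not shown in the log.

  # jobs.fio (reconstructed, not taken verbatim from this trace):
  #   [filename0]
  #   filename=Nvme0n1
  #   rw=randread
  #   bs=4k
  #   iodepth=4
  #   [filename1]
  #   filename=Nvme1n1
  #   rw=randread
  #   bs=4k
  #   iodepth=4
  #
  # nvme.json: a bdev-subsystem config wrapping the two
  # bdev_nvme_attach_controller objects printed just below.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme.json jobs.fio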
00:20:42.360 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:42.360 08:33:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:42.360 "params": { 00:20:42.360 "name": "Nvme0", 00:20:42.360 "trtype": "tcp", 00:20:42.360 "traddr": "10.0.0.2", 00:20:42.360 "adrfam": "ipv4", 00:20:42.360 "trsvcid": "4420", 00:20:42.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:42.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:42.360 "hdgst": false, 00:20:42.360 "ddgst": false 00:20:42.360 }, 00:20:42.360 "method": "bdev_nvme_attach_controller" 00:20:42.360 },{ 00:20:42.360 "params": { 00:20:42.360 "name": "Nvme1", 00:20:42.360 "trtype": "tcp", 00:20:42.360 "traddr": "10.0.0.2", 00:20:42.360 "adrfam": "ipv4", 00:20:42.360 "trsvcid": "4420", 00:20:42.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.360 "hdgst": false, 00:20:42.360 "ddgst": false 00:20:42.360 }, 00:20:42.360 "method": "bdev_nvme_attach_controller" 00:20:42.360 }' 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:42.360 08:33:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.360 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:42.360 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:42.360 fio-3.35 00:20:42.360 Starting 2 threads 00:20:52.379 00:20:52.379 filename0: (groupid=0, jobs=1): err= 0: pid=83416: Mon Jul 15 08:33:43 2024 00:20:52.379 read: IOPS=4703, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:20:52.379 slat (nsec): min=6957, max=69001, avg=13877.63, stdev=4100.64 00:20:52.379 clat (usec): min=438, max=5656, avg=812.31, stdev=81.31 00:20:52.379 lat (usec): min=445, max=5682, avg=826.19, stdev=81.68 00:20:52.379 clat percentiles (usec): 00:20:52.379 | 1.00th=[ 709], 5.00th=[ 742], 10.00th=[ 750], 20.00th=[ 775], 00:20:52.379 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:20:52.379 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 889], 00:20:52.379 | 99.00th=[ 1004], 99.50th=[ 1090], 99.90th=[ 1516], 99.95th=[ 1762], 00:20:52.379 | 99.99th=[ 3261] 00:20:52.379 bw ( KiB/s): min=16256, max=19392, per=49.99%, avg=18806.11, stdev=766.76, samples=19 00:20:52.379 iops : min= 4064, 
max= 4848, avg=4701.53, stdev=191.69, samples=19 00:20:52.379 lat (usec) : 500=0.01%, 750=8.76%, 1000=90.24% 00:20:52.379 lat (msec) : 2=0.97%, 4=0.01%, 10=0.01% 00:20:52.379 cpu : usr=90.58%, sys=8.11%, ctx=24, majf=0, minf=0 00:20:52.379 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.379 issued rwts: total=47036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.379 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:52.379 filename1: (groupid=0, jobs=1): err= 0: pid=83417: Mon Jul 15 08:33:43 2024 00:20:52.379 read: IOPS=4702, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:20:52.380 slat (usec): min=5, max=137, avg=13.57, stdev= 4.06 00:20:52.380 clat (usec): min=626, max=7109, avg=814.49, stdev=93.36 00:20:52.380 lat (usec): min=633, max=7154, avg=828.06, stdev=93.93 00:20:52.380 clat percentiles (usec): 00:20:52.380 | 1.00th=[ 693], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 766], 00:20:52.380 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 824], 00:20:52.380 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 898], 00:20:52.380 | 99.00th=[ 1012], 99.50th=[ 1090], 99.90th=[ 1549], 99.95th=[ 1762], 00:20:52.380 | 99.99th=[ 3261] 00:20:52.380 bw ( KiB/s): min=16256, max=19392, per=49.98%, avg=18804.21, stdev=768.68, samples=19 00:20:52.380 iops : min= 4064, max= 4848, avg=4701.05, stdev=192.17, samples=19 00:20:52.380 lat (usec) : 750=12.77%, 1000=86.06% 00:20:52.380 lat (msec) : 2=1.14%, 4=0.02%, 10=0.01% 00:20:52.380 cpu : usr=90.17%, sys=8.57%, ctx=11, majf=0, minf=9 00:20:52.380 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.380 issued rwts: total=47028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.380 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:52.380 00:20:52.380 Run status group 0 (all jobs): 00:20:52.380 READ: bw=36.7MiB/s (38.5MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=367MiB (385MB), run=10001-10001msec 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 ************************************ 00:20:52.380 END TEST fio_dif_1_multi_subsystems 00:20:52.380 ************************************ 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 00:20:52.380 real 0m11.148s 00:20:52.380 user 0m18.824s 00:20:52.380 sys 0m1.957s 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 08:33:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:52.380 08:33:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:52.380 08:33:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:52.380 08:33:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 ************************************ 00:20:52.380 START TEST fio_dif_rand_params 00:20:52.380 ************************************ 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
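The fio_dif_rand_params test starting here provisions its target the same way as the two tests before it, only with DIF type 3 and a 128k/3-job/iodepth-3 workload. For reference, the four RPCs traced below correspond roughly to the following scripts/rpc.py calls (rpc_cmd is the test suite's wrapper that effectively forwards them to the nvmf_tgt started earlier over /var/tmp/spdk.sock):

  # (the tcp transport itself was created once, near the top of this run,
  #  with: nvmf_create_transport -t tcp -o --dif-insert-or-strip)
  # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # subsystem, namespace and TCP listener on the namespaced target address
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420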
00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 bdev_null0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 [2024-07-15 08:33:44.128863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:52.380 { 00:20:52.380 "params": { 00:20:52.380 "name": "Nvme$subsystem", 00:20:52.380 "trtype": "$TEST_TRANSPORT", 00:20:52.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.380 "adrfam": "ipv4", 00:20:52.380 "trsvcid": "$NVMF_PORT", 00:20:52.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.380 "hdgst": ${hdgst:-false}, 00:20:52.380 "ddgst": ${ddgst:-false} 00:20:52.380 }, 00:20:52.380 "method": "bdev_nvme_attach_controller" 00:20:52.380 } 00:20:52.380 EOF 00:20:52.380 )") 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:52.380 08:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
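The ldd/grep/awk trio that appears before every fio launch in this log is fio_plugin's sanitizer guard: if the spdk_bdev plugin links against an ASan runtime, that runtime has to be preloaded ahead of the plugin, otherwise LD_PRELOAD ends up holding the plugin alone (as it does in this run, hence the leading space in the logged LD_PRELOAD value). A simplified sketch of that logic as reflected in the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      # path of the sanitizer runtime the plugin was built against, if any
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $asan_lib ]] && break
  done
  # empty in this run, so fio below is launched with just the plugin preloaded
  export LD_PRELOAD="$asan_lib $plugin"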
00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:52.381 "params": { 00:20:52.381 "name": "Nvme0", 00:20:52.381 "trtype": "tcp", 00:20:52.381 "traddr": "10.0.0.2", 00:20:52.381 "adrfam": "ipv4", 00:20:52.381 "trsvcid": "4420", 00:20:52.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:52.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:52.381 "hdgst": false, 00:20:52.381 "ddgst": false 00:20:52.381 }, 00:20:52.381 "method": "bdev_nvme_attach_controller" 00:20:52.381 }' 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.381 08:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.381 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:52.381 ... 
00:20:52.381 fio-3.35 00:20:52.381 Starting 3 threads 00:20:57.719 00:20:57.719 filename0: (groupid=0, jobs=1): err= 0: pid=83574: Mon Jul 15 08:33:49 2024 00:20:57.719 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5005msec) 00:20:57.719 slat (nsec): min=7196, max=41120, avg=15987.95, stdev=4477.25 00:20:57.719 clat (usec): min=11304, max=12814, avg=11684.29, stdev=202.35 00:20:57.719 lat (usec): min=11314, max=12831, avg=11700.28, stdev=203.04 00:20:57.719 clat percentiles (usec): 00:20:57.719 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:57.719 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:20:57.719 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:57.719 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12780], 99.95th=[12780], 00:20:57.719 | 99.99th=[12780] 00:20:57.719 bw ( KiB/s): min=31488, max=33024, per=33.35%, avg=32768.00, stdev=543.06, samples=9 00:20:57.719 iops : min= 246, max= 258, avg=256.00, stdev= 4.24, samples=9 00:20:57.719 lat (msec) : 20=100.00% 00:20:57.719 cpu : usr=91.07%, sys=8.39%, ctx=7, majf=0, minf=9 00:20:57.719 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.719 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:57.719 filename0: (groupid=0, jobs=1): err= 0: pid=83575: Mon Jul 15 08:33:49 2024 00:20:57.719 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5003msec) 00:20:57.719 slat (nsec): min=7524, max=43299, avg=16205.27, stdev=4379.78 00:20:57.719 clat (usec): min=9421, max=14180, avg=11680.18, stdev=253.60 00:20:57.719 lat (usec): min=9431, max=14205, avg=11696.39, stdev=254.22 00:20:57.719 clat percentiles (usec): 00:20:57.719 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:57.719 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:20:57.719 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:57.719 | 99.00th=[12256], 99.50th=[12256], 99.90th=[14222], 99.95th=[14222], 00:20:57.719 | 99.99th=[14222] 00:20:57.719 bw ( KiB/s): min=31551, max=33024, per=33.36%, avg=32775.00, stdev=524.59, samples=9 00:20:57.719 iops : min= 246, max= 258, avg=256.00, stdev= 4.24, samples=9 00:20:57.719 lat (msec) : 10=0.23%, 20=99.77% 00:20:57.719 cpu : usr=91.40%, sys=8.02%, ctx=33, majf=0, minf=0 00:20:57.719 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.719 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:57.719 filename0: (groupid=0, jobs=1): err= 0: pid=83576: Mon Jul 15 08:33:49 2024 00:20:57.719 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5007msec) 00:20:57.719 slat (nsec): min=5415, max=66242, avg=16040.86, stdev=6445.59 00:20:57.719 clat (usec): min=11282, max=15144, avg=11688.73, stdev=261.32 00:20:57.719 lat (usec): min=11294, max=15209, avg=11704.77, stdev=262.62 00:20:57.719 clat percentiles (usec): 00:20:57.719 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:57.719 | 30.00th=[11600], 40.00th=[11600], 
50.00th=[11600], 60.00th=[11731], 00:20:57.719 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:57.719 | 99.00th=[12256], 99.50th=[12387], 99.90th=[15139], 99.95th=[15139], 00:20:57.719 | 99.99th=[15139] 00:20:57.719 bw ( KiB/s): min=31488, max=33024, per=33.30%, avg=32716.80, stdev=536.99, samples=10 00:20:57.719 iops : min= 246, max= 258, avg=255.60, stdev= 4.20, samples=10 00:20:57.719 lat (msec) : 20=100.00% 00:20:57.719 cpu : usr=90.83%, sys=8.49%, ctx=75, majf=0, minf=9 00:20:57.719 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.720 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.720 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:57.720 00:20:57.720 Run status group 0 (all jobs): 00:20:57.720 READ: bw=95.9MiB/s (101MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.6MB/s), io=480MiB (504MB), run=5003-5007msec 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:58.037 08:33:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 bdev_null0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 [2024-07-15 08:33:50.143422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 bdev_null1 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.037 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.038 bdev_null2 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.038 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:58.296 { 00:20:58.296 "params": { 00:20:58.296 "name": "Nvme$subsystem", 00:20:58.296 "trtype": "$TEST_TRANSPORT", 00:20:58.296 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:58.296 "adrfam": "ipv4", 00:20:58.296 "trsvcid": "$NVMF_PORT", 00:20:58.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.296 "hdgst": ${hdgst:-false}, 00:20:58.296 "ddgst": ${ddgst:-false} 00:20:58.296 }, 00:20:58.296 "method": "bdev_nvme_attach_controller" 00:20:58.296 } 00:20:58.296 EOF 00:20:58.296 )") 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:58.296 { 00:20:58.296 "params": { 00:20:58.296 "name": "Nvme$subsystem", 00:20:58.296 "trtype": "$TEST_TRANSPORT", 00:20:58.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.296 "adrfam": "ipv4", 00:20:58.296 "trsvcid": "$NVMF_PORT", 00:20:58.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.296 "hdgst": ${hdgst:-false}, 00:20:58.296 "ddgst": ${ddgst:-false} 00:20:58.296 }, 00:20:58.296 "method": "bdev_nvme_attach_controller" 00:20:58.296 } 00:20:58.296 EOF 00:20:58.296 )") 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:58.296 08:33:50 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:58.296 { 00:20:58.296 "params": { 00:20:58.296 "name": "Nvme$subsystem", 00:20:58.296 "trtype": "$TEST_TRANSPORT", 00:20:58.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.296 "adrfam": "ipv4", 00:20:58.296 "trsvcid": "$NVMF_PORT", 00:20:58.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.296 "hdgst": ${hdgst:-false}, 00:20:58.296 "ddgst": ${ddgst:-false} 00:20:58.296 }, 00:20:58.296 "method": "bdev_nvme_attach_controller" 00:20:58.296 } 00:20:58.296 EOF 00:20:58.296 )") 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:58.296 "params": { 00:20:58.296 "name": "Nvme0", 00:20:58.296 "trtype": "tcp", 00:20:58.296 "traddr": "10.0.0.2", 00:20:58.296 "adrfam": "ipv4", 00:20:58.296 "trsvcid": "4420", 00:20:58.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:58.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:58.296 "hdgst": false, 00:20:58.296 "ddgst": false 00:20:58.296 }, 00:20:58.296 "method": "bdev_nvme_attach_controller" 00:20:58.296 },{ 00:20:58.296 "params": { 00:20:58.296 "name": "Nvme1", 00:20:58.296 "trtype": "tcp", 00:20:58.296 "traddr": "10.0.0.2", 00:20:58.296 "adrfam": "ipv4", 00:20:58.296 "trsvcid": "4420", 00:20:58.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.296 "hdgst": false, 00:20:58.296 "ddgst": false 00:20:58.296 }, 00:20:58.296 "method": "bdev_nvme_attach_controller" 00:20:58.296 },{ 00:20:58.296 "params": { 00:20:58.296 "name": "Nvme2", 00:20:58.296 "trtype": "tcp", 00:20:58.296 "traddr": "10.0.0.2", 00:20:58.296 "adrfam": "ipv4", 00:20:58.296 "trsvcid": "4420", 00:20:58.296 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.296 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.296 "hdgst": false, 00:20:58.296 "ddgst": false 00:20:58.296 }, 00:20:58.296 "method": "bdev_nvme_attach_controller" 00:20:58.296 }' 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:58.296 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:58.297 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
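The wrapper has just checked (via ldd | grep) whether the SPDK fio plugin links a sanitizer runtime that would need to be preloaded ahead of it; below it preloads the plugin itself and launches fio, passing the generated bdev JSON over /dev/fd/62 and the job file over /dev/fd/61. A rough equivalent using ordinary files instead of anonymous descriptors (bdev.json and dif.fio are assumed names):

# Sketch of the fio_bdev invocation traced below, with assumed file names.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # external ioengine built by SPDK
# Preload the ASAN runtime first if the plugin was built with it (the trace also
# checks for libclang_rt.asan; both are empty in this run).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio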
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:58.297 08:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.297 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:58.297 ... 00:20:58.297 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:58.297 ... 00:20:58.297 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:58.297 ... 00:20:58.297 fio-3.35 00:20:58.297 Starting 24 threads 00:21:10.552 00:21:10.552 filename0: (groupid=0, jobs=1): err= 0: pid=83671: Mon Jul 15 08:34:01 2024 00:21:10.552 read: IOPS=174, BW=696KiB/s (713kB/s)(6980KiB/10027msec) 00:21:10.552 slat (usec): min=8, max=8025, avg=22.63, stdev=254.90 00:21:10.552 clat (msec): min=47, max=206, avg=91.76, stdev=22.94 00:21:10.552 lat (msec): min=47, max=206, avg=91.78, stdev=22.94 00:21:10.552 clat percentiles (msec): 00:21:10.552 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:21:10.552 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 103], 00:21:10.552 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:21:10.552 | 99.00th=[ 155], 99.50th=[ 197], 99.90th=[ 207], 99.95th=[ 207], 00:21:10.552 | 99.99th=[ 207] 00:21:10.552 bw ( KiB/s): min= 512, max= 896, per=4.08%, avg=693.50, stdev=106.68, samples=20 00:21:10.552 iops : min= 128, max= 224, avg=173.35, stdev=26.62, samples=20 00:21:10.552 lat (msec) : 50=0.23%, 100=59.26%, 250=40.52% 00:21:10.552 cpu : usr=30.82%, sys=2.02%, ctx=865, majf=0, minf=9 00:21:10.552 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:10.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 complete : 0=0.0%, 4=89.2%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.552 filename0: (groupid=0, jobs=1): err= 0: pid=83672: Mon Jul 15 08:34:01 2024 00:21:10.552 read: IOPS=184, BW=740KiB/s (757kB/s)(7432KiB/10047msec) 00:21:10.552 slat (usec): min=4, max=8025, avg=28.93, stdev=288.55 00:21:10.552 clat (msec): min=21, max=208, avg=86.33, stdev=25.40 00:21:10.552 lat (msec): min=21, max=208, avg=86.36, stdev=25.41 00:21:10.552 clat percentiles (msec): 00:21:10.552 | 1.00th=[ 31], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 65], 00:21:10.552 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 97], 00:21:10.552 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 121], 00:21:10.552 | 99.00th=[ 150], 99.50th=[ 178], 99.90th=[ 209], 99.95th=[ 209], 00:21:10.552 | 99.99th=[ 209] 00:21:10.552 bw ( KiB/s): min= 568, max= 1048, per=4.33%, avg=736.80, stdev=139.80, samples=20 00:21:10.552 iops : min= 142, max= 262, avg=184.20, stdev=34.95, samples=20 00:21:10.552 lat (msec) : 50=6.03%, 100=56.89%, 250=37.08% 00:21:10.552 cpu : usr=42.82%, sys=2.79%, ctx=1206, majf=0, minf=9 00:21:10.552 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:10.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.552 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:21:10.552 filename0: (groupid=0, jobs=1): err= 0: pid=83673: Mon Jul 15 08:34:01 2024 00:21:10.552 read: IOPS=184, BW=737KiB/s (755kB/s)(7376KiB/10007msec) 00:21:10.552 slat (usec): min=3, max=8032, avg=31.76, stdev=372.93 00:21:10.552 clat (msec): min=8, max=212, avg=86.65, stdev=24.08 00:21:10.552 lat (msec): min=8, max=212, avg=86.68, stdev=24.07 00:21:10.552 clat percentiles (msec): 00:21:10.552 | 1.00th=[ 39], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 66], 00:21:10.552 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 96], 00:21:10.552 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 117], 95.00th=[ 121], 00:21:10.552 | 99.00th=[ 144], 99.50th=[ 180], 99.90th=[ 213], 99.95th=[ 213], 00:21:10.552 | 99.99th=[ 213] 00:21:10.552 bw ( KiB/s): min= 512, max= 912, per=4.30%, avg=730.53, stdev=99.96, samples=19 00:21:10.552 iops : min= 128, max= 228, avg=182.63, stdev=24.99, samples=19 00:21:10.552 lat (msec) : 10=0.16%, 20=0.33%, 50=2.01%, 100=64.15%, 250=33.35% 00:21:10.552 cpu : usr=32.75%, sys=1.95%, ctx=949, majf=0, minf=9 00:21:10.552 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:10.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.552 filename0: (groupid=0, jobs=1): err= 0: pid=83674: Mon Jul 15 08:34:01 2024 00:21:10.552 read: IOPS=184, BW=736KiB/s (754kB/s)(7400KiB/10051msec) 00:21:10.552 slat (usec): min=6, max=4026, avg=21.61, stdev=179.40 00:21:10.552 clat (msec): min=6, max=203, avg=86.62, stdev=27.54 00:21:10.552 lat (msec): min=6, max=203, avg=86.65, stdev=27.54 00:21:10.552 clat percentiles (msec): 00:21:10.552 | 1.00th=[ 13], 5.00th=[ 43], 10.00th=[ 59], 20.00th=[ 65], 00:21:10.552 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 99], 00:21:10.552 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 116], 95.00th=[ 121], 00:21:10.552 | 99.00th=[ 148], 99.50th=[ 197], 99.90th=[ 203], 99.95th=[ 203], 00:21:10.552 | 99.99th=[ 203] 00:21:10.552 bw ( KiB/s): min= 512, max= 1163, per=4.33%, avg=736.15, stdev=172.46, samples=20 00:21:10.552 iops : min= 128, max= 290, avg=184.00, stdev=43.02, samples=20 00:21:10.552 lat (msec) : 10=0.86%, 20=0.86%, 50=5.41%, 100=54.27%, 250=38.59% 00:21:10.552 cpu : usr=40.93%, sys=2.61%, ctx=1315, majf=0, minf=9 00:21:10.552 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:10.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.552 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.552 filename0: (groupid=0, jobs=1): err= 0: pid=83675: Mon Jul 15 08:34:01 2024 00:21:10.552 read: IOPS=161, BW=647KiB/s (663kB/s)(6488KiB/10022msec) 00:21:10.552 slat (usec): min=3, max=2031, avg=15.51, stdev=50.36 00:21:10.552 clat (msec): min=26, max=207, avg=98.67, stdev=26.46 00:21:10.552 lat (msec): min=26, max=207, avg=98.69, stdev=26.46 00:21:10.552 clat percentiles (msec): 00:21:10.552 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 71], 00:21:10.553 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 108], 00:21:10.553 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 131], 
95.00th=[ 144], 00:21:10.553 | 99.00th=[ 157], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 209], 00:21:10.553 | 99.99th=[ 209] 00:21:10.553 bw ( KiB/s): min= 512, max= 1008, per=3.79%, avg=644.95, stdev=132.57, samples=20 00:21:10.553 iops : min= 128, max= 252, avg=161.20, stdev=33.11, samples=20 00:21:10.553 lat (msec) : 50=0.43%, 100=44.88%, 250=54.69% 00:21:10.553 cpu : usr=32.87%, sys=1.85%, ctx=1449, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=3.9%, 4=15.9%, 8=66.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=91.8%, 8=4.7%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename0: (groupid=0, jobs=1): err= 0: pid=83676: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=183, BW=735KiB/s (753kB/s)(7384KiB/10044msec) 00:21:10.553 slat (usec): min=8, max=9022, avg=25.98, stdev=295.72 00:21:10.553 clat (msec): min=38, max=201, avg=86.83, stdev=23.56 00:21:10.553 lat (msec): min=38, max=201, avg=86.86, stdev=23.56 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 65], 00:21:10.553 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 96], 00:21:10.553 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 120], 00:21:10.553 | 99.00th=[ 146], 99.50th=[ 194], 99.90th=[ 203], 99.95th=[ 203], 00:21:10.553 | 99.99th=[ 203] 00:21:10.553 bw ( KiB/s): min= 512, max= 1010, per=4.31%, avg=732.10, stdev=120.41, samples=20 00:21:10.553 iops : min= 128, max= 252, avg=183.00, stdev=30.04, samples=20 00:21:10.553 lat (msec) : 50=3.03%, 100=60.73%, 250=36.24% 00:21:10.553 cpu : usr=40.33%, sys=2.59%, ctx=1180, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename0: (groupid=0, jobs=1): err= 0: pid=83677: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=182, BW=731KiB/s (749kB/s)(7328KiB/10018msec) 00:21:10.553 slat (usec): min=4, max=4024, avg=23.01, stdev=187.26 00:21:10.553 clat (msec): min=18, max=204, avg=87.32, stdev=23.80 00:21:10.553 lat (msec): min=18, max=204, avg=87.34, stdev=23.80 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 66], 00:21:10.553 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 96], 00:21:10.553 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 116], 95.00th=[ 121], 00:21:10.553 | 99.00th=[ 144], 99.50th=[ 194], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.553 | 99.99th=[ 205] 00:21:10.553 bw ( KiB/s): min= 560, max= 1008, per=4.27%, avg=726.45, stdev=97.71, samples=20 00:21:10.553 iops : min= 140, max= 252, avg=181.60, stdev=24.42, samples=20 00:21:10.553 lat (msec) : 20=0.38%, 50=0.87%, 100=62.01%, 250=36.74% 00:21:10.553 cpu : usr=38.98%, sys=2.22%, ctx=1106, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=77.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:10.553 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename0: (groupid=0, jobs=1): err= 0: pid=83678: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=178, BW=713KiB/s (731kB/s)(7136KiB/10003msec) 00:21:10.553 slat (usec): min=4, max=8035, avg=32.02, stdev=379.20 00:21:10.553 clat (msec): min=6, max=203, avg=89.53, stdev=25.34 00:21:10.553 lat (msec): min=6, max=203, avg=89.56, stdev=25.33 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 30], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 67], 00:21:10.553 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 101], 00:21:10.553 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 123], 00:21:10.553 | 99.00th=[ 155], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.553 | 99.99th=[ 205] 00:21:10.553 bw ( KiB/s): min= 512, max= 912, per=4.15%, avg=704.05, stdev=110.14, samples=19 00:21:10.553 iops : min= 128, max= 228, avg=176.00, stdev=27.51, samples=19 00:21:10.553 lat (msec) : 10=0.34%, 20=0.39%, 50=0.73%, 100=58.41%, 250=40.13% 00:21:10.553 cpu : usr=32.11%, sys=1.85%, ctx=943, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=2.1%, 4=8.5%, 8=74.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=89.2%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename1: (groupid=0, jobs=1): err= 0: pid=83679: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=168, BW=672KiB/s (689kB/s)(6728KiB/10005msec) 00:21:10.553 slat (usec): min=4, max=8030, avg=24.18, stdev=239.38 00:21:10.553 clat (msec): min=8, max=203, avg=94.99, stdev=25.46 00:21:10.553 lat (msec): min=8, max=203, avg=95.02, stdev=25.46 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 41], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 68], 00:21:10.553 | 30.00th=[ 78], 40.00th=[ 87], 50.00th=[ 103], 60.00th=[ 108], 00:21:10.553 | 70.00th=[ 112], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 130], 00:21:10.553 | 99.00th=[ 155], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.553 | 99.99th=[ 205] 00:21:10.553 bw ( KiB/s): min= 448, max= 912, per=3.90%, avg=662.32, stdev=132.25, samples=19 00:21:10.553 iops : min= 112, max= 228, avg=165.58, stdev=33.06, samples=19 00:21:10.553 lat (msec) : 10=0.18%, 20=0.42%, 50=0.59%, 100=46.97%, 250=51.84% 00:21:10.553 cpu : usr=42.85%, sys=2.44%, ctx=1344, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=4.2%, 4=16.6%, 8=65.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=91.7%, 8=4.6%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename1: (groupid=0, jobs=1): err= 0: pid=83680: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=165, BW=661KiB/s (677kB/s)(6652KiB/10061msec) 00:21:10.553 slat (usec): min=4, max=4477, avg=23.00, stdev=190.16 00:21:10.553 clat (msec): min=2, max=202, avg=96.55, stdev=36.71 00:21:10.553 lat (msec): min=2, max=202, avg=96.57, stdev=36.72 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 59], 20.00th=[ 66], 00:21:10.553 | 30.00th=[ 79], 40.00th=[ 101], 
50.00th=[ 107], 60.00th=[ 110], 00:21:10.553 | 70.00th=[ 113], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 148], 00:21:10.553 | 99.00th=[ 161], 99.50th=[ 197], 99.90th=[ 203], 99.95th=[ 203], 00:21:10.553 | 99.99th=[ 203] 00:21:10.553 bw ( KiB/s): min= 384, max= 1904, per=3.87%, avg=658.55, stdev=323.52, samples=20 00:21:10.553 iops : min= 96, max= 476, avg=164.60, stdev=80.88, samples=20 00:21:10.553 lat (msec) : 4=2.89%, 10=2.77%, 20=1.08%, 50=1.80%, 100=31.39% 00:21:10.553 lat (msec) : 250=60.07% 00:21:10.553 cpu : usr=43.96%, sys=2.48%, ctx=1313, majf=0, minf=0 00:21:10.553 IO depths : 1=0.2%, 2=6.4%, 4=24.7%, 8=56.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename1: (groupid=0, jobs=1): err= 0: pid=83681: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=173, BW=696KiB/s (712kB/s)(6992KiB/10052msec) 00:21:10.553 slat (nsec): min=4862, max=56988, avg=13923.59, stdev=4835.62 00:21:10.553 clat (msec): min=11, max=209, avg=91.77, stdev=27.23 00:21:10.553 lat (msec): min=11, max=209, avg=91.79, stdev=27.23 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 17], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 68], 00:21:10.553 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 107], 00:21:10.553 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 123], 00:21:10.553 | 99.00th=[ 157], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 209], 00:21:10.553 | 99.99th=[ 209] 00:21:10.553 bw ( KiB/s): min= 512, max= 1261, per=4.09%, avg=695.05, stdev=172.02, samples=20 00:21:10.553 iops : min= 128, max= 315, avg=173.75, stdev=42.96, samples=20 00:21:10.553 lat (msec) : 20=1.72%, 50=4.06%, 100=48.46%, 250=45.77% 00:21:10.553 cpu : usr=33.67%, sys=2.11%, ctx=1322, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=76.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:10.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=89.4%, 8=9.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename1: (groupid=0, jobs=1): err= 0: pid=83682: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=186, BW=745KiB/s (763kB/s)(7484KiB/10043msec) 00:21:10.553 slat (usec): min=4, max=5040, avg=19.00, stdev=148.70 00:21:10.553 clat (msec): min=23, max=209, avg=85.74, stdev=25.87 00:21:10.553 lat (msec): min=23, max=209, avg=85.76, stdev=25.88 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:21:10.553 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 95], 00:21:10.553 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 116], 95.00th=[ 120], 00:21:10.553 | 99.00th=[ 153], 99.50th=[ 194], 99.90th=[ 205], 99.95th=[ 209], 00:21:10.553 | 99.99th=[ 209] 00:21:10.553 bw ( KiB/s): min= 488, max= 1120, per=4.37%, avg=742.00, stdev=159.22, samples=20 00:21:10.553 iops : min= 122, max= 280, avg=185.50, stdev=39.80, samples=20 00:21:10.553 lat (msec) : 50=6.57%, 100=57.83%, 250=35.60% 00:21:10.553 cpu : usr=44.79%, sys=2.63%, ctx=1361, majf=0, minf=9 00:21:10.553 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:10.553 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.553 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.553 filename1: (groupid=0, jobs=1): err= 0: pid=83683: Mon Jul 15 08:34:01 2024 00:21:10.553 read: IOPS=183, BW=734KiB/s (751kB/s)(7340KiB/10004msec) 00:21:10.553 slat (usec): min=3, max=4021, avg=16.55, stdev=93.66 00:21:10.553 clat (msec): min=8, max=203, avg=87.14, stdev=24.12 00:21:10.553 lat (msec): min=8, max=203, avg=87.16, stdev=24.12 00:21:10.553 clat percentiles (msec): 00:21:10.553 | 1.00th=[ 31], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 69], 00:21:10.553 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 96], 00:21:10.553 | 70.00th=[ 108], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:21:10.554 | 99.00th=[ 144], 99.50th=[ 201], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.554 | 99.99th=[ 205] 00:21:10.554 bw ( KiB/s): min= 512, max= 912, per=4.27%, avg=725.47, stdev=93.11, samples=19 00:21:10.554 iops : min= 128, max= 228, avg=181.37, stdev=23.28, samples=19 00:21:10.554 lat (msec) : 10=0.33%, 20=0.38%, 50=1.31%, 100=64.47%, 250=33.51% 00:21:10.554 cpu : usr=31.98%, sys=2.17%, ctx=877, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename1: (groupid=0, jobs=1): err= 0: pid=83684: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=190, BW=761KiB/s (779kB/s)(7616KiB/10009msec) 00:21:10.554 slat (usec): min=8, max=9041, avg=31.61, stdev=322.63 00:21:10.554 clat (msec): min=15, max=204, avg=83.93, stdev=26.01 00:21:10.554 lat (msec): min=15, max=204, avg=83.96, stdev=26.01 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 25], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 63], 00:21:10.554 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 90], 00:21:10.554 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 121], 00:21:10.554 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.554 | 99.99th=[ 205] 00:21:10.554 bw ( KiB/s): min= 512, max= 1128, per=4.46%, avg=757.47, stdev=149.93, samples=19 00:21:10.554 iops : min= 128, max= 282, avg=189.37, stdev=37.48, samples=19 00:21:10.554 lat (msec) : 20=0.63%, 50=7.56%, 100=57.98%, 250=33.82% 00:21:10.554 cpu : usr=41.44%, sys=2.66%, ctx=1305, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename1: (groupid=0, jobs=1): err= 0: pid=83685: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=174, BW=696KiB/s (713kB/s)(6980KiB/10028msec) 00:21:10.554 slat (usec): min=7, max=8029, avg=28.46, stdev=335.80 00:21:10.554 clat (msec): min=32, max=204, avg=91.73, stdev=23.81 00:21:10.554 lat (msec): min=32, max=204, avg=91.76, stdev=23.81 
00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 72], 00:21:10.554 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 102], 00:21:10.554 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 123], 00:21:10.554 | 99.00th=[ 157], 99.50th=[ 188], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.554 | 99.99th=[ 205] 00:21:10.554 bw ( KiB/s): min= 512, max= 896, per=4.08%, avg=693.70, stdev=100.26, samples=20 00:21:10.554 iops : min= 128, max= 224, avg=173.40, stdev=25.01, samples=20 00:21:10.554 lat (msec) : 50=0.63%, 100=59.03%, 250=40.34% 00:21:10.554 cpu : usr=31.14%, sys=1.67%, ctx=869, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=75.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename1: (groupid=0, jobs=1): err= 0: pid=83686: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=164, BW=658KiB/s (673kB/s)(6608KiB/10047msec) 00:21:10.554 slat (usec): min=3, max=8027, avg=30.80, stdev=342.21 00:21:10.554 clat (msec): min=19, max=203, avg=97.05, stdev=25.61 00:21:10.554 lat (msec): min=19, max=203, avg=97.08, stdev=25.62 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 54], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:21:10.554 | 30.00th=[ 78], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 108], 00:21:10.554 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 133], 00:21:10.554 | 99.00th=[ 157], 99.50th=[ 201], 99.90th=[ 203], 99.95th=[ 203], 00:21:10.554 | 99.99th=[ 203] 00:21:10.554 bw ( KiB/s): min= 512, max= 1008, per=3.85%, avg=654.40, stdev=139.67, samples=20 00:21:10.554 iops : min= 128, max= 252, avg=163.60, stdev=34.92, samples=20 00:21:10.554 lat (msec) : 20=0.85%, 50=0.12%, 100=43.89%, 250=55.15% 00:21:10.554 cpu : usr=33.83%, sys=2.24%, ctx=1000, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=4.1%, 4=16.2%, 8=65.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=91.8%, 8=4.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename2: (groupid=0, jobs=1): err= 0: pid=83687: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=189, BW=757KiB/s (775kB/s)(7572KiB/10009msec) 00:21:10.554 slat (usec): min=8, max=8026, avg=26.55, stdev=318.75 00:21:10.554 clat (msec): min=8, max=203, avg=84.49, stdev=26.88 00:21:10.554 lat (msec): min=8, max=203, avg=84.52, stdev=26.87 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:21:10.554 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:21:10.554 | 70.00th=[ 108], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:21:10.554 | 99.00th=[ 144], 99.50th=[ 203], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.554 | 99.99th=[ 205] 00:21:10.554 bw ( KiB/s): min= 488, max= 1096, per=4.42%, avg=751.21, stdev=140.46, samples=19 00:21:10.554 iops : min= 122, max= 274, avg=187.79, stdev=35.11, samples=19 00:21:10.554 lat (msec) : 10=0.21%, 20=0.32%, 50=8.14%, 100=58.53%, 250=32.81% 00:21:10.554 cpu : usr=31.15%, sys=1.76%, ctx=865, majf=0, 
minf=9 00:21:10.554 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename2: (groupid=0, jobs=1): err= 0: pid=83688: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=151, BW=607KiB/s (622kB/s)(6096KiB/10036msec) 00:21:10.554 slat (usec): min=8, max=12026, avg=32.69, stdev=394.61 00:21:10.554 clat (msec): min=46, max=272, avg=105.05, stdev=27.84 00:21:10.554 lat (msec): min=46, max=272, avg=105.08, stdev=27.85 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 75], 00:21:10.554 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:21:10.554 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 144], 00:21:10.554 | 99.00th=[ 211], 99.50th=[ 211], 99.90th=[ 271], 99.95th=[ 271], 00:21:10.554 | 99.99th=[ 271] 00:21:10.554 bw ( KiB/s): min= 368, max= 912, per=3.54%, avg=602.85, stdev=152.82, samples=20 00:21:10.554 iops : min= 92, max= 228, avg=150.70, stdev=38.18, samples=20 00:21:10.554 lat (msec) : 50=0.13%, 100=35.63%, 250=64.11%, 500=0.13% 00:21:10.554 cpu : usr=39.39%, sys=2.12%, ctx=1190, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=6.2%, 4=24.8%, 8=56.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=94.5%, 8=0.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename2: (groupid=0, jobs=1): err= 0: pid=83689: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=186, BW=748KiB/s (766kB/s)(7516KiB/10052msec) 00:21:10.554 slat (usec): min=4, max=8024, avg=19.30, stdev=196.23 00:21:10.554 clat (msec): min=7, max=202, avg=85.40, stdev=28.76 00:21:10.554 lat (msec): min=7, max=202, avg=85.41, stdev=28.75 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 49], 20.00th=[ 62], 00:21:10.554 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 97], 00:21:10.554 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:21:10.554 | 99.00th=[ 155], 99.50th=[ 194], 99.90th=[ 203], 99.95th=[ 203], 00:21:10.554 | 99.99th=[ 203] 00:21:10.554 bw ( KiB/s): min= 536, max= 1464, per=4.39%, avg=746.80, stdev=219.06, samples=20 00:21:10.554 iops : min= 134, max= 366, avg=186.70, stdev=54.77, samples=20 00:21:10.554 lat (msec) : 10=1.49%, 20=0.37%, 50=8.46%, 100=52.05%, 250=37.63% 00:21:10.554 cpu : usr=32.89%, sys=2.27%, ctx=964, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename2: (groupid=0, jobs=1): err= 0: pid=83690: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=185, BW=741KiB/s (759kB/s)(7420KiB/10016msec) 00:21:10.554 slat (usec): min=4, max=8024, avg=18.20, stdev=186.07 00:21:10.554 clat 
(msec): min=18, max=207, avg=86.28, stdev=23.59 00:21:10.554 lat (msec): min=18, max=207, avg=86.29, stdev=23.59 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 40], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 67], 00:21:10.554 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 91], 00:21:10.554 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 121], 00:21:10.554 | 99.00th=[ 148], 99.50th=[ 192], 99.90th=[ 207], 99.95th=[ 207], 00:21:10.554 | 99.99th=[ 207] 00:21:10.554 bw ( KiB/s): min= 560, max= 1024, per=4.34%, avg=737.70, stdev=100.86, samples=20 00:21:10.554 iops : min= 140, max= 256, avg=184.40, stdev=25.24, samples=20 00:21:10.554 lat (msec) : 20=0.38%, 50=1.89%, 100=64.26%, 250=33.48% 00:21:10.554 cpu : usr=32.20%, sys=1.72%, ctx=958, majf=0, minf=9 00:21:10.554 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:10.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 complete : 0=0.0%, 4=88.1%, 8=10.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.554 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.554 filename2: (groupid=0, jobs=1): err= 0: pid=83691: Mon Jul 15 08:34:01 2024 00:21:10.554 read: IOPS=175, BW=701KiB/s (717kB/s)(7036KiB/10043msec) 00:21:10.554 slat (usec): min=6, max=4025, avg=16.64, stdev=95.74 00:21:10.554 clat (msec): min=35, max=201, avg=91.17, stdev=24.35 00:21:10.554 lat (msec): min=35, max=201, avg=91.19, stdev=24.35 00:21:10.554 clat percentiles (msec): 00:21:10.554 | 1.00th=[ 41], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 69], 00:21:10.554 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 105], 00:21:10.554 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 128], 00:21:10.554 | 99.00th=[ 153], 99.50th=[ 190], 99.90th=[ 201], 99.95th=[ 201], 00:21:10.554 | 99.99th=[ 201] 00:21:10.554 bw ( KiB/s): min= 512, max= 1021, per=4.10%, avg=697.05, stdev=132.02, samples=20 00:21:10.555 iops : min= 128, max= 255, avg=174.25, stdev=32.97, samples=20 00:21:10.555 lat (msec) : 50=1.71%, 100=54.80%, 250=43.49% 00:21:10.555 cpu : usr=42.24%, sys=2.58%, ctx=1291, majf=0, minf=9 00:21:10.555 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:21:10.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 complete : 0=0.0%, 4=89.9%, 8=7.8%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 issued rwts: total=1759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.555 filename2: (groupid=0, jobs=1): err= 0: pid=83692: Mon Jul 15 08:34:01 2024 00:21:10.555 read: IOPS=182, BW=731KiB/s (749kB/s)(7340KiB/10036msec) 00:21:10.555 slat (usec): min=6, max=8025, avg=35.99, stdev=349.46 00:21:10.555 clat (msec): min=36, max=205, avg=87.28, stdev=23.14 00:21:10.555 lat (msec): min=36, max=205, avg=87.31, stdev=23.15 00:21:10.555 clat percentiles (msec): 00:21:10.555 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 67], 00:21:10.555 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 96], 00:21:10.555 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 116], 95.00th=[ 121], 00:21:10.555 | 99.00th=[ 144], 99.50th=[ 194], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.555 | 99.99th=[ 205] 00:21:10.555 bw ( KiB/s): min= 512, max= 960, per=4.28%, avg=727.20, stdev=108.86, samples=20 00:21:10.555 iops : min= 128, max= 240, avg=181.80, stdev=27.21, samples=20 00:21:10.555 lat (msec) 
: 50=1.36%, 100=63.43%, 250=35.20% 00:21:10.555 cpu : usr=38.07%, sys=2.18%, ctx=1150, majf=0, minf=9 00:21:10.555 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:10.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.555 filename2: (groupid=0, jobs=1): err= 0: pid=83693: Mon Jul 15 08:34:01 2024 00:21:10.555 read: IOPS=176, BW=705KiB/s (722kB/s)(7100KiB/10068msec) 00:21:10.555 slat (usec): min=4, max=4018, avg=16.18, stdev=95.19 00:21:10.555 clat (msec): min=2, max=203, avg=90.49, stdev=32.00 00:21:10.555 lat (msec): min=2, max=203, avg=90.51, stdev=32.00 00:21:10.555 clat percentiles (msec): 00:21:10.555 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 58], 20.00th=[ 66], 00:21:10.555 | 30.00th=[ 73], 40.00th=[ 90], 50.00th=[ 104], 60.00th=[ 107], 00:21:10.555 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 127], 00:21:10.555 | 99.00th=[ 150], 99.50th=[ 194], 99.90th=[ 203], 99.95th=[ 203], 00:21:10.555 | 99.99th=[ 203] 00:21:10.555 bw ( KiB/s): min= 464, max= 1904, per=4.14%, avg=703.35, stdev=303.38, samples=20 00:21:10.555 iops : min= 116, max= 476, avg=175.80, stdev=75.86, samples=20 00:21:10.555 lat (msec) : 4=2.70%, 10=2.59%, 20=1.01%, 50=1.69%, 100=40.85% 00:21:10.555 lat (msec) : 250=51.15% 00:21:10.555 cpu : usr=40.16%, sys=2.69%, ctx=1336, majf=0, minf=0 00:21:10.555 IO depths : 1=0.2%, 2=4.6%, 4=17.7%, 8=63.9%, 16=13.5%, 32=0.0%, >=64=0.0% 00:21:10.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 complete : 0=0.0%, 4=92.3%, 8=3.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 issued rwts: total=1775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.555 filename2: (groupid=0, jobs=1): err= 0: pid=83694: Mon Jul 15 08:34:01 2024 00:21:10.555 read: IOPS=173, BW=695KiB/s (711kB/s)(6980KiB/10049msec) 00:21:10.555 slat (usec): min=5, max=6025, avg=17.93, stdev=143.97 00:21:10.555 clat (msec): min=19, max=203, avg=91.97, stdev=25.80 00:21:10.555 lat (msec): min=19, max=203, avg=91.99, stdev=25.80 00:21:10.555 clat percentiles (msec): 00:21:10.555 | 1.00th=[ 34], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 68], 00:21:10.555 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 95], 60.00th=[ 105], 00:21:10.555 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 132], 00:21:10.555 | 99.00th=[ 157], 99.50th=[ 197], 99.90th=[ 205], 99.95th=[ 205], 00:21:10.555 | 99.99th=[ 205] 00:21:10.555 bw ( KiB/s): min= 512, max= 1136, per=4.07%, avg=691.60, stdev=136.77, samples=20 00:21:10.555 iops : min= 128, max= 284, avg=172.90, stdev=34.19, samples=20 00:21:10.555 lat (msec) : 20=0.80%, 50=2.06%, 100=51.58%, 250=45.56% 00:21:10.555 cpu : usr=39.14%, sys=2.49%, ctx=1121, majf=0, minf=9 00:21:10.555 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=72.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:21:10.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 complete : 0=0.0%, 4=89.9%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.555 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:10.555 00:21:10.555 Run status group 0 (all jobs): 00:21:10.555 READ: bw=16.6MiB/s (17.4MB/s), 607KiB/s-761KiB/s 
(622kB/s-779kB/s), io=167MiB (175MB), run=10003-10068msec 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 
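The destroy_subsystems pass just traced removes each NVMe-oF subsystem before deleting its backing null bdev, in creation order. Sketched as standalone calls (scripts/rpc.py path assumed):

# Teardown sketch matching the rpc_cmd calls above: subsystem first, then its bdev.
for i in 0 1 2; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    ./scripts/rpc.py bdev_null_delete "bdev_null$i"
done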
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 bdev_null0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 [2024-07-15 08:34:01.505235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.555 bdev_null1 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:10.555 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.556 { 00:21:10.556 "params": { 00:21:10.556 "name": "Nvme$subsystem", 00:21:10.556 "trtype": "$TEST_TRANSPORT", 00:21:10.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.556 "adrfam": "ipv4", 00:21:10.556 "trsvcid": "$NVMF_PORT", 00:21:10.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.556 "hdgst": ${hdgst:-false}, 00:21:10.556 "ddgst": ${ddgst:-false} 00:21:10.556 }, 00:21:10.556 "method": "bdev_nvme_attach_controller" 00:21:10.556 } 00:21:10.556 EOF 00:21:10.556 )") 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:10.556 08:34:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.556 { 00:21:10.556 "params": { 00:21:10.556 "name": "Nvme$subsystem", 00:21:10.556 "trtype": "$TEST_TRANSPORT", 00:21:10.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.556 "adrfam": "ipv4", 00:21:10.556 "trsvcid": "$NVMF_PORT", 00:21:10.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.556 "hdgst": ${hdgst:-false}, 00:21:10.556 "ddgst": ${ddgst:-false} 00:21:10.556 }, 00:21:10.556 "method": "bdev_nvme_attach_controller" 00:21:10.556 } 00:21:10.556 EOF 00:21:10.556 )") 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
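For readers following the trace, the bdev and subsystem setup shown above reduces to four RPCs per subsystem before the fio JSON config is generated; a minimal sketch (invoking scripts/rpc.py directly is an assumption, since the harness goes through its rpc_cmd wrapper, but every argument below is copied verbatim from the trace):

    # create a DIF type-1 null bdev (size 64, block size 512, 16-byte metadata), as traced
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it through an NVMe-oF subsystem listening on NVMe/TCP 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # the trace repeats the same four calls for bdev_null1 / cnode1

The generated JSON that follows in the trace simply encodes the matching bdev_nvme_attach_controller parameters so that fio's spdk_bdev ioengine can attach to both subsystems.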
00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:10.556 "params": { 00:21:10.556 "name": "Nvme0", 00:21:10.556 "trtype": "tcp", 00:21:10.556 "traddr": "10.0.0.2", 00:21:10.556 "adrfam": "ipv4", 00:21:10.556 "trsvcid": "4420", 00:21:10.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.556 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.556 "hdgst": false, 00:21:10.556 "ddgst": false 00:21:10.556 }, 00:21:10.556 "method": "bdev_nvme_attach_controller" 00:21:10.556 },{ 00:21:10.556 "params": { 00:21:10.556 "name": "Nvme1", 00:21:10.556 "trtype": "tcp", 00:21:10.556 "traddr": "10.0.0.2", 00:21:10.556 "adrfam": "ipv4", 00:21:10.556 "trsvcid": "4420", 00:21:10.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.556 "hdgst": false, 00:21:10.556 "ddgst": false 00:21:10.556 }, 00:21:10.556 "method": "bdev_nvme_attach_controller" 00:21:10.556 }' 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:10.556 08:34:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.556 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:10.556 ... 00:21:10.556 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:10.556 ... 
00:21:10.556 fio-3.35 00:21:10.556 Starting 4 threads 00:21:15.828 00:21:15.828 filename0: (groupid=0, jobs=1): err= 0: pid=83828: Mon Jul 15 08:34:07 2024 00:21:15.828 read: IOPS=2096, BW=16.4MiB/s (17.2MB/s)(81.9MiB/5002msec) 00:21:15.828 slat (nsec): min=6157, max=43060, avg=15303.89, stdev=3222.85 00:21:15.828 clat (usec): min=1264, max=7039, avg=3766.77, stdev=662.27 00:21:15.828 lat (usec): min=1278, max=7057, avg=3782.07, stdev=662.13 00:21:15.828 clat percentiles (usec): 00:21:15.828 | 1.00th=[ 2089], 5.00th=[ 2573], 10.00th=[ 2966], 20.00th=[ 3294], 00:21:15.828 | 30.00th=[ 3359], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 3884], 00:21:15.828 | 70.00th=[ 3916], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5014], 00:21:15.828 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 5932], 00:21:15.828 | 99.99th=[ 6456] 00:21:15.828 bw ( KiB/s): min=15552, max=18128, per=25.38%, avg=16837.22, stdev=926.63, samples=9 00:21:15.828 iops : min= 1944, max= 2266, avg=2104.56, stdev=115.90, samples=9 00:21:15.828 lat (msec) : 2=0.81%, 4=70.64%, 10=28.55% 00:21:15.828 cpu : usr=92.44%, sys=6.74%, ctx=5, majf=0, minf=9 00:21:15.828 IO depths : 1=0.1%, 2=11.1%, 4=61.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 issued rwts: total=10489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.828 filename0: (groupid=0, jobs=1): err= 0: pid=83829: Mon Jul 15 08:34:07 2024 00:21:15.828 read: IOPS=2132, BW=16.7MiB/s (17.5MB/s)(83.3MiB/5001msec) 00:21:15.828 slat (nsec): min=7237, max=47003, avg=11781.68, stdev=3574.53 00:21:15.828 clat (usec): min=653, max=6868, avg=3714.59, stdev=711.76 00:21:15.828 lat (usec): min=663, max=6880, avg=3726.37, stdev=712.21 00:21:15.828 clat percentiles (usec): 00:21:15.828 | 1.00th=[ 1450], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 3294], 00:21:15.828 | 30.00th=[ 3359], 40.00th=[ 3621], 50.00th=[ 3818], 60.00th=[ 3884], 00:21:15.828 | 70.00th=[ 3916], 80.00th=[ 4178], 90.00th=[ 4752], 95.00th=[ 5014], 00:21:15.828 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 6194], 99.95th=[ 6194], 00:21:15.828 | 99.99th=[ 6849] 00:21:15.828 bw ( KiB/s): min=15552, max=18848, per=25.85%, avg=17150.22, stdev=1160.95, samples=9 00:21:15.828 iops : min= 1944, max= 2356, avg=2143.78, stdev=145.12, samples=9 00:21:15.828 lat (usec) : 750=0.04%, 1000=0.03% 00:21:15.828 lat (msec) : 2=2.23%, 4=71.10%, 10=26.61% 00:21:15.828 cpu : usr=91.74%, sys=7.46%, ctx=6, majf=0, minf=0 00:21:15.828 IO depths : 1=0.1%, 2=10.0%, 4=62.4%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 issued rwts: total=10663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.828 filename1: (groupid=0, jobs=1): err= 0: pid=83830: Mon Jul 15 08:34:07 2024 00:21:15.828 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5002msec) 00:21:15.828 slat (usec): min=5, max=106, avg=14.53, stdev= 3.81 00:21:15.828 clat (usec): min=755, max=6786, avg=3748.28, stdev=903.77 00:21:15.828 lat (usec): min=765, max=6797, avg=3762.82, stdev=903.75 00:21:15.828 clat percentiles (usec): 00:21:15.828 | 1.00th=[ 1385], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 
3294], 00:21:15.828 | 30.00th=[ 3326], 40.00th=[ 3621], 50.00th=[ 3818], 60.00th=[ 3884], 00:21:15.828 | 70.00th=[ 3916], 80.00th=[ 4178], 90.00th=[ 4948], 95.00th=[ 5211], 00:21:15.828 | 99.00th=[ 6128], 99.50th=[ 6194], 99.90th=[ 6325], 99.95th=[ 6456], 00:21:15.828 | 99.99th=[ 6521] 00:21:15.828 bw ( KiB/s): min=12704, max=19616, per=25.52%, avg=16931.44, stdev=1945.84, samples=9 00:21:15.828 iops : min= 1588, max= 2452, avg=2116.33, stdev=243.16, samples=9 00:21:15.828 lat (usec) : 1000=0.36% 00:21:15.828 lat (msec) : 2=4.22%, 4=69.74%, 10=25.68% 00:21:15.828 cpu : usr=91.24%, sys=7.90%, ctx=5, majf=0, minf=9 00:21:15.828 IO depths : 1=0.1%, 2=9.7%, 4=62.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 issued rwts: total=10548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.828 filename1: (groupid=0, jobs=1): err= 0: pid=83831: Mon Jul 15 08:34:07 2024 00:21:15.828 read: IOPS=1955, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5001msec) 00:21:15.828 slat (nsec): min=5057, max=40958, avg=14546.08, stdev=3788.31 00:21:15.828 clat (usec): min=603, max=8435, avg=4041.52, stdev=867.71 00:21:15.828 lat (usec): min=614, max=8452, avg=4056.07, stdev=868.18 00:21:15.828 clat percentiles (usec): 00:21:15.828 | 1.00th=[ 2040], 5.00th=[ 2999], 10.00th=[ 3294], 20.00th=[ 3326], 00:21:15.828 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3916], 00:21:15.828 | 70.00th=[ 4178], 80.00th=[ 4752], 90.00th=[ 5342], 95.00th=[ 5932], 00:21:15.828 | 99.00th=[ 6194], 99.50th=[ 6194], 99.90th=[ 6325], 99.95th=[ 6849], 00:21:15.828 | 99.99th=[ 8455] 00:21:15.828 bw ( KiB/s): min=11360, max=17984, per=23.46%, avg=15562.56, stdev=2199.21, samples=9 00:21:15.828 iops : min= 1420, max= 2248, avg=1945.22, stdev=274.87, samples=9 00:21:15.828 lat (usec) : 750=0.01% 00:21:15.828 lat (msec) : 2=0.95%, 4=62.38%, 10=36.66% 00:21:15.828 cpu : usr=91.04%, sys=8.00%, ctx=12, majf=0, minf=10 00:21:15.828 IO depths : 1=0.1%, 2=14.2%, 4=59.1%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.828 issued rwts: total=9780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.828 00:21:15.828 Run status group 0 (all jobs): 00:21:15.828 READ: bw=64.8MiB/s (67.9MB/s), 15.3MiB/s-16.7MiB/s (16.0MB/s-17.5MB/s), io=324MiB (340MB), run=5001-5002msec 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.828 08:34:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.828 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 ************************************ 00:21:15.829 END TEST fio_dif_rand_params 00:21:15.829 ************************************ 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 00:21:15.829 real 0m23.468s 00:21:15.829 user 2m3.758s 00:21:15.829 sys 0m8.997s 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 08:34:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:15.829 08:34:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:15.829 08:34:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:15.829 08:34:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 ************************************ 00:21:15.829 START TEST fio_dif_digest 00:21:15.829 ************************************ 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 bdev_null0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:15.829 [2024-07-15 08:34:07.644442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.829 { 00:21:15.829 "params": { 00:21:15.829 "name": "Nvme$subsystem", 00:21:15.829 "trtype": "$TEST_TRANSPORT", 00:21:15.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.829 "adrfam": "ipv4", 00:21:15.829 "trsvcid": "$NVMF_PORT", 00:21:15.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.829 "hdgst": ${hdgst:-false}, 00:21:15.829 "ddgst": ${ddgst:-false} 00:21:15.829 }, 00:21:15.829 "method": "bdev_nvme_attach_controller" 
00:21:15.829 } 00:21:15.829 EOF 00:21:15.829 )") 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:15.829 "params": { 00:21:15.829 "name": "Nvme0", 00:21:15.829 "trtype": "tcp", 00:21:15.829 "traddr": "10.0.0.2", 00:21:15.829 "adrfam": "ipv4", 00:21:15.829 "trsvcid": "4420", 00:21:15.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:15.829 "hdgst": true, 00:21:15.829 "ddgst": true 00:21:15.829 }, 00:21:15.829 "method": "bdev_nvme_attach_controller" 00:21:15.829 }' 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:15.829 08:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.829 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:15.829 ... 00:21:15.829 fio-3.35 00:21:15.829 Starting 3 threads 00:21:28.018 00:21:28.018 filename0: (groupid=0, jobs=1): err= 0: pid=83938: Mon Jul 15 08:34:18 2024 00:21:28.018 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(283MiB/10001msec) 00:21:28.018 slat (nsec): min=7110, max=45130, avg=16114.47, stdev=5279.33 00:21:28.018 clat (usec): min=13018, max=17317, avg=13210.60, stdev=206.26 00:21:28.018 lat (usec): min=13030, max=17345, avg=13226.72, stdev=206.73 00:21:28.018 clat percentiles (usec): 00:21:28.018 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13173], 20.00th=[13173], 00:21:28.018 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:28.018 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:21:28.018 | 99.00th=[13829], 99.50th=[14091], 99.90th=[17171], 99.95th=[17433], 00:21:28.018 | 99.99th=[17433] 00:21:28.018 bw ( KiB/s): min=28416, max=29184, per=33.37%, avg=29022.32, stdev=321.68, samples=19 00:21:28.018 iops : min= 222, max= 228, avg=226.74, stdev= 2.51, samples=19 00:21:28.018 lat (msec) : 20=100.00% 00:21:28.018 cpu : usr=91.45%, sys=8.01%, ctx=13, majf=0, minf=0 00:21:28.018 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.018 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.018 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:28.018 filename0: (groupid=0, jobs=1): err= 0: pid=83939: Mon Jul 15 08:34:18 2024 00:21:28.018 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10008msec) 00:21:28.018 slat (nsec): min=7803, max=43982, avg=17185.14, stdev=4823.78 00:21:28.018 clat (usec): min=9416, max=15418, avg=13199.64, stdev=203.92 00:21:28.018 lat (usec): min=9431, max=15449, avg=13216.83, stdev=203.99 00:21:28.018 clat percentiles (usec): 00:21:28.018 | 1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:28.018 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:28.018 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:21:28.018 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15401], 99.95th=[15401], 00:21:28.018 | 99.99th=[15401] 00:21:28.018 bw ( KiB/s): min=28416, max=29184, per=33.38%, avg=29025.26, stdev=316.02, samples=19 00:21:28.018 iops : min= 222, max= 228, avg=226.74, stdev= 2.51, samples=19 00:21:28.018 lat (msec) : 10=0.13%, 20=99.87% 00:21:28.018 cpu : usr=92.03%, sys=7.46%, ctx=18, majf=0, minf=9 00:21:28.018 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.018 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.018 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:28.018 filename0: (groupid=0, jobs=1): err= 0: pid=83940: Mon Jul 15 08:34:18 2024 
00:21:28.018 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10010msec) 00:21:28.018 slat (nsec): min=7878, max=44583, avg=17172.93, stdev=5047.09 00:21:28.018 clat (usec): min=9408, max=15985, avg=13200.51, stdev=221.50 00:21:28.018 lat (usec): min=9423, max=16006, avg=13217.69, stdev=221.61 00:21:28.018 clat percentiles (usec): 00:21:28.018 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13173], 20.00th=[13173], 00:21:28.018 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:28.018 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:21:28.018 | 99.00th=[13829], 99.50th=[14091], 99.90th=[15926], 99.95th=[15926], 00:21:28.018 | 99.99th=[15926] 00:21:28.018 bw ( KiB/s): min=28359, max=29184, per=33.37%, avg=29019.32, stdev=327.85, samples=19 00:21:28.018 iops : min= 221, max= 228, avg=226.68, stdev= 2.63, samples=19 00:21:28.018 lat (msec) : 10=0.13%, 20=99.87% 00:21:28.018 cpu : usr=91.54%, sys=7.84%, ctx=64, majf=0, minf=0 00:21:28.018 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.018 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.018 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:28.018 00:21:28.018 Run status group 0 (all jobs): 00:21:28.018 READ: bw=84.9MiB/s (89.1MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=850MiB (891MB), run=10001-10010msec 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:28.018 ************************************ 00:21:28.018 END TEST fio_dif_digest 00:21:28.018 ************************************ 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.018 00:21:28.018 real 0m11.017s 00:21:28.018 user 0m28.161s 00:21:28.018 sys 0m2.610s 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.018 08:34:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:28.018 08:34:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:28.018 08:34:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:28.018 08:34:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.018 08:34:18 
nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.018 rmmod nvme_tcp 00:21:28.018 rmmod nvme_fabrics 00:21:28.018 rmmod nvme_keyring 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83190 ']' 00:21:28.018 08:34:18 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83190 00:21:28.018 08:34:18 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83190 ']' 00:21:28.018 08:34:18 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83190 00:21:28.018 08:34:18 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:28.018 08:34:18 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.019 08:34:18 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83190 00:21:28.019 killing process with pid 83190 00:21:28.019 08:34:18 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.019 08:34:18 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.019 08:34:18 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83190' 00:21:28.019 08:34:18 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83190 00:21:28.019 08:34:18 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83190 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:28.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.019 Waiting for block devices as requested 00:21:28.019 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:28.019 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.019 08:34:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:28.019 08:34:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.019 08:34:19 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:28.019 ************************************ 00:21:28.019 END TEST nvmf_dif 00:21:28.019 ************************************ 00:21:28.019 00:21:28.019 real 0m59.824s 00:21:28.019 user 3m47.329s 00:21:28.019 sys 0m20.296s 00:21:28.019 08:34:19 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.019 08:34:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:28.019 08:34:19 -- common/autotest_common.sh@1142 -- # return 0 00:21:28.019 08:34:19 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:28.019 08:34:19 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:28.019 08:34:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.019 08:34:19 -- common/autotest_common.sh@10 -- # set +x 00:21:28.019 ************************************ 00:21:28.019 START TEST nvmf_abort_qd_sizes 00:21:28.019 ************************************ 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:28.019 * Looking for test storage... 00:21:28.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:28.019 08:34:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:28.019 Cannot find device "nvmf_tgt_br" 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.019 Cannot find device "nvmf_tgt_br2" 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:28.019 Cannot find device "nvmf_tgt_br" 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:28.019 Cannot find device "nvmf_tgt_br2" 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:28.019 08:34:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:28.019 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:28.020 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:28.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:21:28.020 00:21:28.020 --- 10.0.0.2 ping statistics --- 00:21:28.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.020 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:28.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:28.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:28.020 00:21:28.020 --- 10.0.0.3 ping statistics --- 00:21:28.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.020 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:28.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:21:28.020 00:21:28.020 --- 10.0.0.1 ping statistics --- 00:21:28.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.020 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:28.020 08:34:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:28.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.953 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.953 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84535 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84535 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84535 ']' 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.953 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:28.953 [2024-07-15 08:34:21.106707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:28.953 [2024-07-15 08:34:21.106826] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.211 [2024-07-15 08:34:21.247680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.211 [2024-07-15 08:34:21.371676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.211 [2024-07-15 08:34:21.371736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.211 [2024-07-15 08:34:21.371761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.211 [2024-07-15 08:34:21.371771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.211 [2024-07-15 08:34:21.371781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.211 [2024-07-15 08:34:21.371882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.211 [2024-07-15 08:34:21.372380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.211 [2024-07-15 08:34:21.372592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.211 [2024-07-15 08:34:21.372638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.507 [2024-07-15 08:34:21.428896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:30.075 08:34:22 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
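The device walk traced here selects controllers purely by PCI class code (class 01, subclass 08, prog-if 02). Collapsed into one pipeline, assembled from the individual commands in the trace rather than quoted from scripts/common.sh, it is roughly:

    # print the PCI addresses of all NVMe-class devices (class code 0108, prog-if 02)
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this runner it yields the two NVMe devices 0000:00:10.0 and 0000:00:11.0, and the test picks the first one as its abort target.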
00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.075 08:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:30.075 ************************************ 00:21:30.075 START TEST spdk_target_abort 00:21:30.075 ************************************ 00:21:30.075 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:30.075 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:30.075 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:30.075 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.075 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.333 spdk_targetn1 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.333 [2024-07-15 08:34:22.318130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:30.333 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.334 [2024-07-15 08:34:22.346269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.334 08:34:22 
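Note: the rpc_cmd calls traced above set up the SPDK side of spdk_target_abort: the local NVMe device at 0000:00:10.0 is attached as bdev spdk_targetn1 and exported over NVMe/TCP. A sketch of the equivalent direct scripts/rpc.py invocations (rpc_cmd is the autotest wrapper that forwards the same arguments to the target's JSON-RPC server):

    # Attach the PCIe NVMe device as an SPDK bdev, then export it via NVMe/TCP
    # on 10.0.0.2:4420 under nqn.2016-06.io.spdk:testnqn.
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420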
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:30.334 08:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:33.614 Initializing NVMe Controllers 00:21:33.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:33.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:33.614 Initialization complete. Launching workers. 
00:21:33.614 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11589, failed: 0 00:21:33.614 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1039, failed to submit 10550 00:21:33.614 success 808, unsuccess 231, failed 0 00:21:33.614 08:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:33.614 08:34:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:36.917 Initializing NVMe Controllers 00:21:36.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:36.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:36.917 Initialization complete. Launching workers. 00:21:36.917 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8967, failed: 0 00:21:36.917 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7803 00:21:36.917 success 369, unsuccess 795, failed 0 00:21:36.917 08:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:36.917 08:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:40.194 Initializing NVMe Controllers 00:21:40.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:40.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:40.194 Initialization complete. Launching workers. 
00:21:40.194 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27974, failed: 0 00:21:40.194 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2206, failed to submit 25768 00:21:40.194 success 340, unsuccess 1866, failed 0 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.194 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84535 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84535 ']' 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84535 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84535 00:21:40.761 killing process with pid 84535 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84535' 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84535 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84535 00:21:40.761 ************************************ 00:21:40.761 END TEST spdk_target_abort 00:21:40.761 ************************************ 00:21:40.761 00:21:40.761 real 0m10.651s 00:21:40.761 user 0m43.341s 00:21:40.761 sys 0m2.199s 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.761 08:34:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.761 08:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:40.761 08:34:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:40.761 08:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.761 08:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.761 08:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:41.020 
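Note: the three spdk_target_abort runs above are produced by rabort(), which loops over queue depths 4, 24 and 64 and reports, per run, the I/Os completed, the aborts submitted, and how many of those aborts succeeded. A condensed sketch of that loop, using the exact connection string from the trace:

    # Drive 4 KiB 50/50 read-write I/O against the TCP subsystem and abort
    # outstanding commands at increasing queue depths.
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done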
************************************ 00:21:41.020 START TEST kernel_target_abort 00:21:41.020 ************************************ 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:41.020 08:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:41.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.279 Waiting for block devices as requested 00:21:41.279 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.545 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:41.545 No valid GPT data, bailing 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:41.545 No valid GPT data, bailing 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:41.545 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:41.546 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:41.546 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:41.546 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:41.546 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:41.546 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:41.546 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:41.804 No valid GPT data, bailing 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:41.804 No valid GPT data, bailing 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 --hostid=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 -a 10.0.0.1 -t tcp -s 4420 00:21:41.804 00:21:41.804 Discovery Log Number of Records 2, Generation counter 2 00:21:41.804 =====Discovery Log Entry 0====== 00:21:41.804 trtype: tcp 00:21:41.804 adrfam: ipv4 00:21:41.804 subtype: current discovery subsystem 00:21:41.804 treq: not specified, sq flow control disable supported 00:21:41.804 portid: 1 00:21:41.804 trsvcid: 4420 00:21:41.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:41.804 traddr: 10.0.0.1 00:21:41.804 eflags: none 00:21:41.804 sectype: none 00:21:41.804 =====Discovery Log Entry 1====== 00:21:41.804 trtype: tcp 00:21:41.804 adrfam: ipv4 00:21:41.804 subtype: nvme subsystem 00:21:41.804 treq: not specified, sq flow control disable supported 00:21:41.804 portid: 1 00:21:41.804 trsvcid: 4420 00:21:41.804 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:41.804 traddr: 10.0.0.1 00:21:41.804 eflags: none 00:21:41.804 sectype: none 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:41.804 08:34:33 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:41.804 08:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:45.091 Initializing NVMe Controllers 00:21:45.092 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:45.092 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:45.092 Initialization complete. Launching workers. 00:21:45.092 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32648, failed: 0 00:21:45.092 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32648, failed to submit 0 00:21:45.092 success 0, unsuccess 32648, failed 0 00:21:45.092 08:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:45.092 08:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:48.405 Initializing NVMe Controllers 00:21:48.405 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:48.405 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:48.405 Initialization complete. Launching workers. 
00:21:48.405 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69433, failed: 0 00:21:48.405 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29799, failed to submit 39634 00:21:48.405 success 0, unsuccess 29799, failed 0 00:21:48.405 08:34:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:48.405 08:34:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:51.691 Initializing NVMe Controllers 00:21:51.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:51.691 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:51.691 Initialization complete. Launching workers. 00:21:51.691 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81371, failed: 0 00:21:51.691 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20302, failed to submit 61069 00:21:51.691 success 0, unsuccess 20302, failed 0 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:51.691 08:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:52.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:54.793 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:54.793 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:54.793 ************************************ 00:21:54.793 END TEST kernel_target_abort 00:21:54.793 ************************************ 00:21:54.793 00:21:54.793 real 0m13.520s 00:21:54.793 user 0m6.138s 00:21:54.793 sys 0m4.695s 00:21:54.793 08:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:54.793 08:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:54.793 
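Note: kernel_target_abort, completed above, repeats the same abort runs against the Linux in-kernel nvmet target rather than an SPDK target, configured entirely through configfs. The redirection targets of the echo calls are not visible in the xtrace, so the attribute file names below are assumed to be the standard nvmet configfs entries; the paths and values are the ones from the trace:

    # Assumed reconstruction of configure_kernel_target: export /dev/nvme1n1 as
    # namespace 1 of nqn.2016-06.io.spdk:testnqn on a TCP port at 10.0.0.1:4420.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute file assumed
    echo 1 > "$subsys/attr_allow_any_host"                         # attribute file assumed
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # Teardown, as in clean_kernel_target above.
    echo 0 > "$subsys/namespaces/1/enable"                         # attribute file assumed
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet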
08:34:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.793 rmmod nvme_tcp 00:21:54.793 rmmod nvme_fabrics 00:21:54.793 rmmod nvme_keyring 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.793 Process with pid 84535 is not found 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84535 ']' 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84535 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84535 ']' 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84535 00:21:54.793 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84535) - No such process 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84535 is not found' 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:54.793 08:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:54.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:54.793 Waiting for block devices as requested 00:21:55.051 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:55.051 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.051 08:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:55.051 00:21:55.052 real 0m27.493s 00:21:55.052 user 0m50.700s 00:21:55.052 sys 0m8.264s 00:21:55.052 ************************************ 00:21:55.052 END TEST nvmf_abort_qd_sizes 00:21:55.052 ************************************ 00:21:55.052 08:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.052 08:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:55.310 08:34:47 -- common/autotest_common.sh@1142 -- # return 0 00:21:55.310 08:34:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:55.310 08:34:47 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:55.310 08:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.310 08:34:47 -- common/autotest_common.sh@10 -- # set +x 00:21:55.310 ************************************ 00:21:55.310 START TEST keyring_file 00:21:55.310 ************************************ 00:21:55.310 08:34:47 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:55.310 * Looking for test storage... 00:21:55.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.311 08:34:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.311 08:34:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.311 08:34:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.311 08:34:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.311 08:34:47 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.311 08:34:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.311 08:34:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:55.311 08:34:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aNqU7QcJAC 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aNqU7QcJAC 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aNqU7QcJAC 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.aNqU7QcJAC 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NwHPuJZdzd 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:55.311 08:34:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NwHPuJZdzd 00:21:55.311 08:34:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NwHPuJZdzd 00:21:55.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NwHPuJZdzd 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=85399 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:55.311 08:34:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85399 00:21:55.311 08:34:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85399 ']' 00:21:55.311 08:34:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.311 08:34:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.311 08:34:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.311 08:34:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.311 08:34:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:55.570 [2024-07-15 08:34:47.530034] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
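Note: prep_key, traced above, generates each TLS PSK file used by the keyring test: it formats the raw hex key into an NVMe/TCP PSK interchange string ("NVMeTLSkey-1:<digest>:<base64 payload>:", produced by the python snippet whose body the xtrace elides) and stores it in a mktemp file that must stay at mode 0600; the keyring later rejects more permissive modes. A sketch of the flow using the helpers named in the trace (the /tmp paths are simply this run's mktemp results):

    # key0 / key1 preparation as traced; format_interchange_psk comes from
    # nvmf/common.sh and its python body is not shown in the xtrace.
    key0path=$(mktemp)   # /tmp/tmp.aNqU7QcJAC in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"
    key1path=$(mktemp)   # /tmp/tmp.NwHPuJZdzd in this run
    format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
    chmod 0600 "$key1path"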
00:21:55.570 [2024-07-15 08:34:47.530380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85399 ] 00:21:55.570 [2024-07-15 08:34:47.669165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.830 [2024-07-15 08:34:47.826027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.830 [2024-07-15 08:34:47.888912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:56.414 08:34:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.414 08:34:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:56.414 08:34:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:56.414 08:34:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.414 08:34:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:56.414 [2024-07-15 08:34:48.565440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.414 null0 00:21:56.673 [2024-07-15 08:34:48.597385] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.673 [2024-07-15 08:34:48.597650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:56.673 [2024-07-15 08:34:48.605383] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.673 08:34:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:56.673 [2024-07-15 08:34:48.617396] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:56.673 request: 00:21:56.673 { 00:21:56.673 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.673 "secure_channel": false, 00:21:56.673 "listen_address": { 00:21:56.673 "trtype": "tcp", 00:21:56.673 "traddr": "127.0.0.1", 00:21:56.673 "trsvcid": "4420" 00:21:56.673 }, 00:21:56.673 "method": "nvmf_subsystem_add_listener", 00:21:56.673 "req_id": 1 00:21:56.673 } 00:21:56.673 Got JSON-RPC error response 00:21:56.673 response: 00:21:56.673 { 00:21:56.673 "code": -32602, 00:21:56.673 "message": "Invalid parameters" 00:21:56.673 } 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
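Note: the failed nvmf_subsystem_add_listener call above is intentional; it is wrapped in the NOT helper, which passes only when the wrapped command fails (here with "Listener already exists"). A simplified sketch of that pattern (the real helper in autotest_common.sh also special-cases exit codes above 128, as the es > 128 check just below shows):

    # Expect-failure wrapper: succeed only if the wrapped command exits non-zero.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # Example from the trace: adding a listener that already exists must be rejected.
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0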
00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.673 08:34:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=85416 00:21:56.673 08:34:48 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:56.673 08:34:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85416 /var/tmp/bperf.sock 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85416 ']' 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.673 08:34:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:56.673 [2024-07-15 08:34:48.683862] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:56.673 [2024-07-15 08:34:48.684207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85416 ] 00:21:56.673 [2024-07-15 08:34:48.823099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.931 [2024-07-15 08:34:48.932986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.932 [2024-07-15 08:34:48.988380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:57.505 08:34:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.506 08:34:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:57.506 08:34:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:21:57.506 08:34:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:21:57.768 08:34:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NwHPuJZdzd 00:21:57.768 08:34:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NwHPuJZdzd 00:21:58.027 08:34:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:58.027 08:34:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:58.027 08:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.027 08:34:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.027 08:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.339 08:34:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.aNqU7QcJAC == 
\/\t\m\p\/\t\m\p\.\a\N\q\U\7\Q\c\J\A\C ]] 00:21:58.339 08:34:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:58.339 08:34:50 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:58.339 08:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.339 08:34:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.339 08:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:58.598 08:34:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NwHPuJZdzd == \/\t\m\p\/\t\m\p\.\N\w\H\P\u\J\Z\d\z\d ]] 00:21:58.598 08:34:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:58.598 08:34:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:58.598 08:34:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.598 08:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.598 08:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.598 08:34:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.859 08:34:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:58.859 08:34:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:58.859 08:34:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:58.859 08:34:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.859 08:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.859 08:34:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.859 08:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:59.427 08:34:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:59.427 08:34:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.427 08:34:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.427 [2024-07-15 08:34:51.532787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.427 nvme0n1 00:21:59.688 08:34:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:59.688 08:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:59.688 08:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.688 08:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.688 08:34:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.688 08:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.947 08:34:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:59.947 08:34:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:59.947 08:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:59.947 08:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.947 08:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:59.947 08:34:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.947 08:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:00.206 08:34:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:22:00.206 08:34:52 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:00.206 Running I/O for 1 seconds... 00:22:01.140 00:22:01.140 Latency(us) 00:22:01.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.140 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:01.140 nvme0n1 : 1.01 10861.54 42.43 0.00 0.00 11739.47 6404.65 19184.17 00:22:01.140 =================================================================================================================== 00:22:01.140 Total : 10861.54 42.43 0.00 0.00 11739.47 6404.65 19184.17 00:22:01.140 0 00:22:01.140 08:34:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:01.140 08:34:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:01.705 08:34:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:01.705 08:34:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:22:01.705 08:34:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:01.705 08:34:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:01.963 08:34:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:01.963 08:34:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:01.963 08:34:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:01.963 08:34:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:02.221 [2024-07-15 08:34:54.302704] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.221 [2024-07-15 08:34:54.303685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11024f0 (107): Transport endpoint is not connected 00:22:02.221 [2024-07-15 08:34:54.304675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11024f0 (9): Bad file descriptor 00:22:02.221 [2024-07-15 08:34:54.305670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:02.221 [2024-07-15 08:34:54.305691] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:02.221 [2024-07-15 08:34:54.305718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:02.221 request: 00:22:02.221 { 00:22:02.221 "name": "nvme0", 00:22:02.221 "trtype": "tcp", 00:22:02.221 "traddr": "127.0.0.1", 00:22:02.221 "adrfam": "ipv4", 00:22:02.221 "trsvcid": "4420", 00:22:02.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:02.221 "prchk_reftag": false, 00:22:02.221 "prchk_guard": false, 00:22:02.221 "hdgst": false, 00:22:02.221 "ddgst": false, 00:22:02.221 "psk": "key1", 00:22:02.221 "method": "bdev_nvme_attach_controller", 00:22:02.221 "req_id": 1 00:22:02.221 } 00:22:02.221 Got JSON-RPC error response 00:22:02.221 response: 00:22:02.221 { 00:22:02.221 "code": -5, 00:22:02.221 "message": "Input/output error" 00:22:02.221 } 00:22:02.221 08:34:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:02.221 08:34:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.221 08:34:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.221 08:34:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.221 08:34:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:22:02.221 08:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:02.221 08:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:02.221 08:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:02.221 08:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:02.221 08:34:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.479 08:34:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:22:02.479 08:34:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:22:02.479 08:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:02.479 08:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:02.479 08:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:02.479 08:34:54 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:22:02.479 08:34:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.737 08:34:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:02.737 08:34:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:22:02.737 08:34:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:03.303 08:34:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:22:03.303 08:34:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:03.303 08:34:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:22:03.303 08:34:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.303 08:34:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:22:03.559 08:34:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:22:03.559 08:34:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.aNqU7QcJAC 00:22:03.559 08:34:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:22:03.559 08:34:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:03.559 08:34:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:22:03.559 08:34:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:03.560 08:34:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.560 08:34:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:03.560 08:34:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.560 08:34:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:22:03.560 08:34:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:22:03.817 [2024-07-15 08:34:55.936144] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aNqU7QcJAC': 0100660 00:22:03.817 [2024-07-15 08:34:55.936198] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:03.817 request: 00:22:03.817 { 00:22:03.817 "name": "key0", 00:22:03.817 "path": "/tmp/tmp.aNqU7QcJAC", 00:22:03.817 "method": "keyring_file_add_key", 00:22:03.817 "req_id": 1 00:22:03.817 } 00:22:03.817 Got JSON-RPC error response 00:22:03.817 response: 00:22:03.817 { 00:22:03.817 "code": -1, 00:22:03.817 "message": "Operation not permitted" 00:22:03.817 } 00:22:03.817 08:34:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:03.817 08:34:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.817 08:34:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.817 08:34:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.817 08:34:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.aNqU7QcJAC 00:22:03.817 08:34:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:22:03.817 08:34:55 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aNqU7QcJAC 00:22:04.075 08:34:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.aNqU7QcJAC 00:22:04.075 08:34:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:22:04.075 08:34:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:04.075 08:34:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.075 08:34:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.075 08:34:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:04.075 08:34:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.333 08:34:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:22:04.333 08:34:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:04.333 08:34:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.333 08:34:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:04.590 [2024-07-15 08:34:56.756357] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.aNqU7QcJAC': No such file or directory 00:22:04.590 [2024-07-15 08:34:56.756410] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:04.590 [2024-07-15 08:34:56.756468] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:04.590 [2024-07-15 08:34:56.756477] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:04.590 [2024-07-15 08:34:56.756486] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:04.590 request: 00:22:04.590 { 00:22:04.590 "name": "nvme0", 00:22:04.590 "trtype": "tcp", 00:22:04.590 "traddr": "127.0.0.1", 00:22:04.590 "adrfam": "ipv4", 00:22:04.590 "trsvcid": "4420", 00:22:04.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:04.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:04.590 "prchk_reftag": false, 00:22:04.590 "prchk_guard": false, 00:22:04.590 "hdgst": false, 00:22:04.590 "ddgst": false, 00:22:04.590 "psk": "key0", 00:22:04.590 "method": "bdev_nvme_attach_controller", 00:22:04.590 "req_id": 1 00:22:04.590 } 00:22:04.590 
Got JSON-RPC error response 00:22:04.590 response: 00:22:04.590 { 00:22:04.590 "code": -19, 00:22:04.590 "message": "No such device" 00:22:04.590 } 00:22:04.848 08:34:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:22:04.848 08:34:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.848 08:34:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.848 08:34:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.848 08:34:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:22:04.848 08:34:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:04.848 08:34:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:04.848 08:34:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:04.848 08:34:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:04.848 08:34:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:04.848 08:34:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:04.848 08:34:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:05.105 08:34:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vTVHbQozrM 00:22:05.105 08:34:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:05.105 08:34:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:05.105 08:34:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:05.105 08:34:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:05.105 08:34:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:05.105 08:34:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:05.105 08:34:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:05.105 08:34:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vTVHbQozrM 00:22:05.105 08:34:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vTVHbQozrM 00:22:05.105 08:34:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.vTVHbQozrM 00:22:05.105 08:34:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTVHbQozrM 00:22:05.105 08:34:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vTVHbQozrM 00:22:05.361 08:34:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:05.361 08:34:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:05.646 nvme0n1 00:22:05.646 08:34:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:22:05.646 08:34:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:05.646 08:34:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:05.646 08:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:05.646 08:34:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:22:05.646 08:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:05.903 08:34:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:22:05.903 08:34:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:22:05.903 08:34:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:06.160 08:34:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:22:06.160 08:34:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:22:06.160 08:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.160 08:34:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.160 08:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:06.419 08:34:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:22:06.419 08:34:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:22:06.419 08:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:06.419 08:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:06.419 08:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:06.419 08:34:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:06.419 08:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:06.678 08:34:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:06.678 08:34:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:06.678 08:34:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:06.936 08:34:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:06.936 08:34:59 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:06.936 08:34:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.193 08:34:59 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:07.193 08:34:59 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vTVHbQozrM 00:22:07.193 08:34:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vTVHbQozrM 00:22:07.451 08:34:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NwHPuJZdzd 00:22:07.451 08:34:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NwHPuJZdzd 00:22:07.709 08:34:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:07.710 08:34:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:08.276 nvme0n1 00:22:08.276 08:35:00 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:08.276 08:35:00 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:08.535 08:35:00 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:08.535 "subsystems": [ 00:22:08.535 { 00:22:08.535 "subsystem": "keyring", 00:22:08.535 "config": [ 00:22:08.535 { 00:22:08.535 "method": "keyring_file_add_key", 00:22:08.535 "params": { 00:22:08.535 "name": "key0", 00:22:08.535 "path": "/tmp/tmp.vTVHbQozrM" 00:22:08.535 } 00:22:08.535 }, 00:22:08.535 { 00:22:08.535 "method": "keyring_file_add_key", 00:22:08.535 "params": { 00:22:08.535 "name": "key1", 00:22:08.535 "path": "/tmp/tmp.NwHPuJZdzd" 00:22:08.535 } 00:22:08.535 } 00:22:08.535 ] 00:22:08.535 }, 00:22:08.535 { 00:22:08.535 "subsystem": "iobuf", 00:22:08.535 "config": [ 00:22:08.535 { 00:22:08.535 "method": "iobuf_set_options", 00:22:08.535 "params": { 00:22:08.535 "small_pool_count": 8192, 00:22:08.535 "large_pool_count": 1024, 00:22:08.535 "small_bufsize": 8192, 00:22:08.535 "large_bufsize": 135168 00:22:08.535 } 00:22:08.535 } 00:22:08.535 ] 00:22:08.535 }, 00:22:08.535 { 00:22:08.535 "subsystem": "sock", 00:22:08.535 "config": [ 00:22:08.535 { 00:22:08.536 "method": "sock_set_default_impl", 00:22:08.536 "params": { 00:22:08.536 "impl_name": "uring" 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "sock_impl_set_options", 00:22:08.536 "params": { 00:22:08.536 "impl_name": "ssl", 00:22:08.536 "recv_buf_size": 4096, 00:22:08.536 "send_buf_size": 4096, 00:22:08.536 "enable_recv_pipe": true, 00:22:08.536 "enable_quickack": false, 00:22:08.536 "enable_placement_id": 0, 00:22:08.536 "enable_zerocopy_send_server": true, 00:22:08.536 "enable_zerocopy_send_client": false, 00:22:08.536 "zerocopy_threshold": 0, 00:22:08.536 "tls_version": 0, 00:22:08.536 "enable_ktls": false 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "sock_impl_set_options", 00:22:08.536 "params": { 00:22:08.536 "impl_name": "posix", 00:22:08.536 "recv_buf_size": 2097152, 00:22:08.536 "send_buf_size": 2097152, 00:22:08.536 "enable_recv_pipe": true, 00:22:08.536 "enable_quickack": false, 00:22:08.536 "enable_placement_id": 0, 00:22:08.536 "enable_zerocopy_send_server": true, 00:22:08.536 "enable_zerocopy_send_client": false, 00:22:08.536 "zerocopy_threshold": 0, 00:22:08.536 "tls_version": 0, 00:22:08.536 "enable_ktls": false 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "sock_impl_set_options", 00:22:08.536 "params": { 00:22:08.536 "impl_name": "uring", 00:22:08.536 "recv_buf_size": 2097152, 00:22:08.536 "send_buf_size": 2097152, 00:22:08.536 "enable_recv_pipe": true, 00:22:08.536 "enable_quickack": false, 00:22:08.536 "enable_placement_id": 0, 00:22:08.536 "enable_zerocopy_send_server": false, 00:22:08.536 "enable_zerocopy_send_client": false, 00:22:08.536 "zerocopy_threshold": 0, 00:22:08.536 "tls_version": 0, 00:22:08.536 "enable_ktls": false 00:22:08.536 } 00:22:08.536 } 00:22:08.536 ] 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "subsystem": "vmd", 00:22:08.536 "config": [] 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "subsystem": "accel", 00:22:08.536 "config": [ 00:22:08.536 { 00:22:08.536 "method": "accel_set_options", 00:22:08.536 "params": { 00:22:08.536 "small_cache_size": 128, 00:22:08.536 "large_cache_size": 16, 00:22:08.536 "task_count": 2048, 00:22:08.536 "sequence_count": 2048, 00:22:08.536 "buf_count": 2048 00:22:08.536 } 00:22:08.536 } 00:22:08.536 ] 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "subsystem": "bdev", 00:22:08.536 "config": [ 00:22:08.536 { 
00:22:08.536 "method": "bdev_set_options", 00:22:08.536 "params": { 00:22:08.536 "bdev_io_pool_size": 65535, 00:22:08.536 "bdev_io_cache_size": 256, 00:22:08.536 "bdev_auto_examine": true, 00:22:08.536 "iobuf_small_cache_size": 128, 00:22:08.536 "iobuf_large_cache_size": 16 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "bdev_raid_set_options", 00:22:08.536 "params": { 00:22:08.536 "process_window_size_kb": 1024 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "bdev_iscsi_set_options", 00:22:08.536 "params": { 00:22:08.536 "timeout_sec": 30 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "bdev_nvme_set_options", 00:22:08.536 "params": { 00:22:08.536 "action_on_timeout": "none", 00:22:08.536 "timeout_us": 0, 00:22:08.536 "timeout_admin_us": 0, 00:22:08.536 "keep_alive_timeout_ms": 10000, 00:22:08.536 "arbitration_burst": 0, 00:22:08.536 "low_priority_weight": 0, 00:22:08.536 "medium_priority_weight": 0, 00:22:08.536 "high_priority_weight": 0, 00:22:08.536 "nvme_adminq_poll_period_us": 10000, 00:22:08.536 "nvme_ioq_poll_period_us": 0, 00:22:08.536 "io_queue_requests": 512, 00:22:08.536 "delay_cmd_submit": true, 00:22:08.536 "transport_retry_count": 4, 00:22:08.536 "bdev_retry_count": 3, 00:22:08.536 "transport_ack_timeout": 0, 00:22:08.536 "ctrlr_loss_timeout_sec": 0, 00:22:08.536 "reconnect_delay_sec": 0, 00:22:08.536 "fast_io_fail_timeout_sec": 0, 00:22:08.536 "disable_auto_failback": false, 00:22:08.536 "generate_uuids": false, 00:22:08.536 "transport_tos": 0, 00:22:08.536 "nvme_error_stat": false, 00:22:08.536 "rdma_srq_size": 0, 00:22:08.536 "io_path_stat": false, 00:22:08.536 "allow_accel_sequence": false, 00:22:08.536 "rdma_max_cq_size": 0, 00:22:08.536 "rdma_cm_event_timeout_ms": 0, 00:22:08.536 "dhchap_digests": [ 00:22:08.536 "sha256", 00:22:08.536 "sha384", 00:22:08.536 "sha512" 00:22:08.536 ], 00:22:08.536 "dhchap_dhgroups": [ 00:22:08.536 "null", 00:22:08.536 "ffdhe2048", 00:22:08.536 "ffdhe3072", 00:22:08.536 "ffdhe4096", 00:22:08.536 "ffdhe6144", 00:22:08.536 "ffdhe8192" 00:22:08.536 ] 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "bdev_nvme_attach_controller", 00:22:08.536 "params": { 00:22:08.536 "name": "nvme0", 00:22:08.536 "trtype": "TCP", 00:22:08.536 "adrfam": "IPv4", 00:22:08.536 "traddr": "127.0.0.1", 00:22:08.536 "trsvcid": "4420", 00:22:08.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.536 "prchk_reftag": false, 00:22:08.536 "prchk_guard": false, 00:22:08.536 "ctrlr_loss_timeout_sec": 0, 00:22:08.536 "reconnect_delay_sec": 0, 00:22:08.536 "fast_io_fail_timeout_sec": 0, 00:22:08.536 "psk": "key0", 00:22:08.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:08.536 "hdgst": false, 00:22:08.536 "ddgst": false 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "bdev_nvme_set_hotplug", 00:22:08.536 "params": { 00:22:08.536 "period_us": 100000, 00:22:08.536 "enable": false 00:22:08.536 } 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "method": "bdev_wait_for_examine" 00:22:08.536 } 00:22:08.536 ] 00:22:08.536 }, 00:22:08.536 { 00:22:08.536 "subsystem": "nbd", 00:22:08.536 "config": [] 00:22:08.536 } 00:22:08.536 ] 00:22:08.536 }' 00:22:08.536 08:35:00 keyring_file -- keyring/file.sh@114 -- # killprocess 85416 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85416 ']' 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85416 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@953 -- # uname 
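The JSON returned by save_config at keyring/file.sh@112 above (keyring_file_add_key entries for key0/key1 plus a bdev_nvme_attach_controller entry with "psk": "key0") is replayed into the next bdevperf instance just below: keyring/file.sh@115 starts it with -c /dev/fd/63 and echoes the same configuration into it. A hedged sketch of that save-and-replay pattern, using the binary and socket paths from this job and assuming the previous instance has already released /var/tmp/bperf.sock:

    # Sketch: capture the live configuration and boot a fresh bdevperf from it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # JSON as dumped above, keyring and bdev_nvme sections included.
    config=$("$rpc" -s /var/tmp/bperf.sock save_config)

    # /dev/fd/63 in the trace is simply bash process substitution feeding the saved JSON.
    "$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
        -c <(echo "$config")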
00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85416 00:22:08.536 killing process with pid 85416 00:22:08.536 Received shutdown signal, test time was about 1.000000 seconds 00:22:08.536 00:22:08.536 Latency(us) 00:22:08.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.536 =================================================================================================================== 00:22:08.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85416' 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@967 -- # kill 85416 00:22:08.536 08:35:00 keyring_file -- common/autotest_common.sh@972 -- # wait 85416 00:22:08.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:08.850 08:35:00 keyring_file -- keyring/file.sh@117 -- # bperfpid=85671 00:22:08.850 08:35:00 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:08.850 08:35:00 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85671 /var/tmp/bperf.sock 00:22:08.850 08:35:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85671 ']' 00:22:08.850 08:35:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:08.850 08:35:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.850 08:35:00 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:08.850 "subsystems": [ 00:22:08.850 { 00:22:08.850 "subsystem": "keyring", 00:22:08.850 "config": [ 00:22:08.850 { 00:22:08.850 "method": "keyring_file_add_key", 00:22:08.850 "params": { 00:22:08.850 "name": "key0", 00:22:08.850 "path": "/tmp/tmp.vTVHbQozrM" 00:22:08.850 } 00:22:08.850 }, 00:22:08.850 { 00:22:08.850 "method": "keyring_file_add_key", 00:22:08.850 "params": { 00:22:08.850 "name": "key1", 00:22:08.850 "path": "/tmp/tmp.NwHPuJZdzd" 00:22:08.850 } 00:22:08.850 } 00:22:08.850 ] 00:22:08.850 }, 00:22:08.850 { 00:22:08.850 "subsystem": "iobuf", 00:22:08.850 "config": [ 00:22:08.850 { 00:22:08.850 "method": "iobuf_set_options", 00:22:08.850 "params": { 00:22:08.850 "small_pool_count": 8192, 00:22:08.850 "large_pool_count": 1024, 00:22:08.850 "small_bufsize": 8192, 00:22:08.850 "large_bufsize": 135168 00:22:08.850 } 00:22:08.850 } 00:22:08.850 ] 00:22:08.850 }, 00:22:08.850 { 00:22:08.850 "subsystem": "sock", 00:22:08.850 "config": [ 00:22:08.850 { 00:22:08.850 "method": "sock_set_default_impl", 00:22:08.850 "params": { 00:22:08.850 "impl_name": "uring" 00:22:08.850 } 00:22:08.850 }, 00:22:08.850 { 00:22:08.850 "method": "sock_impl_set_options", 00:22:08.850 "params": { 00:22:08.850 "impl_name": "ssl", 00:22:08.850 "recv_buf_size": 4096, 00:22:08.850 "send_buf_size": 4096, 00:22:08.850 "enable_recv_pipe": true, 00:22:08.850 "enable_quickack": false, 00:22:08.850 "enable_placement_id": 0, 00:22:08.850 "enable_zerocopy_send_server": true, 00:22:08.850 "enable_zerocopy_send_client": false, 00:22:08.850 "zerocopy_threshold": 0, 00:22:08.850 "tls_version": 0, 00:22:08.850 
"enable_ktls": false 00:22:08.850 } 00:22:08.850 }, 00:22:08.850 { 00:22:08.850 "method": "sock_impl_set_options", 00:22:08.850 "params": { 00:22:08.850 "impl_name": "posix", 00:22:08.850 "recv_buf_size": 2097152, 00:22:08.850 "send_buf_size": 2097152, 00:22:08.850 "enable_recv_pipe": true, 00:22:08.850 "enable_quickack": false, 00:22:08.850 "enable_placement_id": 0, 00:22:08.850 "enable_zerocopy_send_server": true, 00:22:08.850 "enable_zerocopy_send_client": false, 00:22:08.850 "zerocopy_threshold": 0, 00:22:08.850 "tls_version": 0, 00:22:08.850 "enable_ktls": false 00:22:08.850 } 00:22:08.850 }, 00:22:08.851 { 00:22:08.851 "method": "sock_impl_set_options", 00:22:08.851 "params": { 00:22:08.851 "impl_name": "uring", 00:22:08.851 "recv_buf_size": 2097152, 00:22:08.851 "send_buf_size": 2097152, 00:22:08.851 "enable_recv_pipe": true, 00:22:08.851 "enable_quickack": false, 00:22:08.851 "enable_placement_id": 0, 00:22:08.851 "enable_zerocopy_send_server": false, 00:22:08.851 "enable_zerocopy_send_client": false, 00:22:08.851 "zerocopy_threshold": 0, 00:22:08.851 "tls_version": 0, 00:22:08.851 "enable_ktls": false 00:22:08.851 } 00:22:08.851 } 00:22:08.851 ] 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "subsystem": "vmd", 00:22:08.851 "config": [] 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "subsystem": "accel", 00:22:08.851 "config": [ 00:22:08.851 { 00:22:08.851 "method": "accel_set_options", 00:22:08.851 "params": { 00:22:08.851 "small_cache_size": 128, 00:22:08.851 "large_cache_size": 16, 00:22:08.851 "task_count": 2048, 00:22:08.851 "sequence_count": 2048, 00:22:08.851 "buf_count": 2048 00:22:08.851 } 00:22:08.851 } 00:22:08.851 ] 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "subsystem": "bdev", 00:22:08.851 "config": [ 00:22:08.851 { 00:22:08.851 "method": "bdev_set_options", 00:22:08.851 "params": { 00:22:08.851 "bdev_io_pool_size": 65535, 00:22:08.851 "bdev_io_cache_size": 256, 00:22:08.851 "bdev_auto_examine": true, 00:22:08.851 "iobuf_small_cache_size": 128, 00:22:08.851 "iobuf_large_cache_size": 16 00:22:08.851 } 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "method": "bdev_raid_set_options", 00:22:08.851 "params": { 00:22:08.851 "process_window_size_kb": 1024 00:22:08.851 } 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "method": "bdev_iscsi_set_options", 00:22:08.851 "params": { 00:22:08.851 "timeout_sec": 30 00:22:08.851 } 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "method": "bdev_nvme_set_options", 00:22:08.851 "params": { 00:22:08.851 "action_on_timeout": "none", 00:22:08.851 "timeout_us": 0, 00:22:08.851 "timeout_admin_us": 0, 00:22:08.851 "keep_alive_timeout_ms": 10000, 00:22:08.851 "arbitration_burst": 0, 00:22:08.851 "low_priority_weight": 0, 00:22:08.851 "medium_priority_weight": 0, 00:22:08.851 "high_priority_weight": 0, 00:22:08.851 "nvme_adminq_poll_period_us": 10000, 00:22:08.851 "nvme_ioq_poll_period_us": 0, 00:22:08.851 "io_queue_requests": 512, 00:22:08.851 "delay_cmd_submit": true, 00:22:08.851 "transport_retry_count": 4, 00:22:08.851 "bdev_retry_count": 3, 00:22:08.851 "transport_ack_timeout": 0, 00:22:08.851 "ctrlr_loss_timeout_sec": 0, 00:22:08.851 "reconnect_delay_sec": 0, 00:22:08.851 "fast_io_fail_timeout_sec": 0, 00:22:08.851 "disable_auto_failback": false, 00:22:08.851 "generate_uuids": false, 00:22:08.851 "transport_tos": 0, 00:22:08.851 "nvme_error_stat": false, 00:22:08.851 "rdma_srq_size": 0, 00:22:08.851 "io_path_stat": false, 00:22:08.851 "allow_accel_sequence": false, 00:22:08.851 "rdma_max_cq_size": 0, 00:22:08.851 "rdma_cm_event_timeout_ms": 0, 
00:22:08.851 "dhchap_digests": [ 00:22:08.851 "sha256", 00:22:08.851 "sha384", 00:22:08.851 "sha512" 00:22:08.851 ], 00:22:08.851 "dhchap_dhgroups": [ 00:22:08.851 "null", 00:22:08.851 "ffdhe2048", 00:22:08.851 "ffdhe3072", 00:22:08.851 "ffdhe4096", 00:22:08.851 "ffdhe6144", 00:22:08.851 "ffdhe8192" 00:22:08.851 ] 00:22:08.851 } 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "method": "bdev_nvme_attach_controller", 00:22:08.851 "params": { 00:22:08.851 "name": "nvme0", 00:22:08.851 "trtype": "TCP", 00:22:08.851 "adrfam": "IPv4", 00:22:08.851 "traddr": "127.0.0.1", 00:22:08.851 "trsvcid": "4420", 00:22:08.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.851 "prchk_reftag": false, 00:22:08.851 "prchk_guard": false, 00:22:08.851 "ctrlr_loss_timeout_sec": 0, 00:22:08.851 "reconnect_delay_sec": 0, 00:22:08.851 "fast_io_fail_timeout_sec": 0, 00:22:08.851 "psk": "key0", 00:22:08.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:08.851 "hdgst": false, 00:22:08.851 "ddgst": false 00:22:08.851 } 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "method": "bdev_nvme_set_hotplug", 00:22:08.851 "params": { 00:22:08.851 "period_us": 100000, 00:22:08.851 "enable": false 00:22:08.851 } 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "method": "bdev_wait_for_examine" 00:22:08.851 } 00:22:08.851 ] 00:22:08.851 }, 00:22:08.851 { 00:22:08.851 "subsystem": "nbd", 00:22:08.851 "config": [] 00:22:08.851 } 00:22:08.851 ] 00:22:08.851 }' 00:22:08.851 08:35:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:08.851 08:35:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.851 08:35:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.851 [2024-07-15 08:35:00.780094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:08.851 [2024-07-15 08:35:00.780189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85671 ] 00:22:08.851 [2024-07-15 08:35:00.913070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.851 [2024-07-15 08:35:01.024270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.110 [2024-07-15 08:35:01.160897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:09.110 [2024-07-15 08:35:01.215037] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.676 08:35:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.676 08:35:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:09.676 08:35:01 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:09.676 08:35:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.676 08:35:01 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:09.934 08:35:01 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:09.934 08:35:01 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:09.934 08:35:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:09.934 08:35:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:09.934 08:35:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.934 08:35:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.934 08:35:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.193 08:35:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:10.193 08:35:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:10.193 08:35:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:10.193 08:35:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.193 08:35:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.193 08:35:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.193 08:35:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:10.452 08:35:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:10.452 08:35:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:10.452 08:35:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:10.452 08:35:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:10.742 08:35:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:10.742 08:35:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:10.742 08:35:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.vTVHbQozrM /tmp/tmp.NwHPuJZdzd 00:22:10.742 08:35:02 keyring_file -- keyring/file.sh@20 -- # killprocess 85671 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85671 ']' 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85671 00:22:10.742 08:35:02 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85671 00:22:10.742 killing process with pid 85671 00:22:10.742 Received shutdown signal, test time was about 1.000000 seconds 00:22:10.742 00:22:10.742 Latency(us) 00:22:10.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.742 =================================================================================================================== 00:22:10.742 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85671' 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@967 -- # kill 85671 00:22:10.742 08:35:02 keyring_file -- common/autotest_common.sh@972 -- # wait 85671 00:22:11.001 08:35:02 keyring_file -- keyring/file.sh@21 -- # killprocess 85399 00:22:11.001 08:35:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85399 ']' 00:22:11.001 08:35:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85399 00:22:11.001 08:35:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:11.001 08:35:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.001 08:35:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85399 00:22:11.001 killing process with pid 85399 00:22:11.001 08:35:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:11.001 08:35:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:11.001 08:35:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85399' 00:22:11.001 08:35:03 keyring_file -- common/autotest_common.sh@967 -- # kill 85399 00:22:11.001 [2024-07-15 08:35:03.019505] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:11.001 08:35:03 keyring_file -- common/autotest_common.sh@972 -- # wait 85399 00:22:11.566 00:22:11.566 real 0m16.191s 00:22:11.566 user 0m40.306s 00:22:11.566 sys 0m3.151s 00:22:11.566 08:35:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.566 ************************************ 00:22:11.566 END TEST keyring_file 00:22:11.566 ************************************ 00:22:11.566 08:35:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:11.566 08:35:03 -- common/autotest_common.sh@1142 -- # return 0 00:22:11.566 08:35:03 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:11.566 08:35:03 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:11.566 08:35:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:11.566 08:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.566 08:35:03 -- common/autotest_common.sh@10 -- # set +x 00:22:11.566 ************************************ 00:22:11.566 START TEST keyring_linux 00:22:11.566 ************************************ 00:22:11.566 08:35:03 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:11.566 * Looking for test 
storage... 00:22:11.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:11.566 08:35:03 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:11.566 08:35:03 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd0d64d4-8ee8-499e-819c-5b5e52cf5ed6 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.566 08:35:03 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.566 08:35:03 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.566 08:35:03 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.566 08:35:03 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.566 08:35:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.566 08:35:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.567 08:35:03 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.567 08:35:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:11.567 08:35:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:11.567 08:35:03 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:11.567 /tmp/:spdk-test:key0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:11.567 08:35:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:11.567 /tmp/:spdk-test:key1 00:22:11.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.567 08:35:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85785 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:11.567 08:35:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85785 00:22:11.567 08:35:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85785 ']' 00:22:11.567 08:35:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.567 08:35:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.567 08:35:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.567 08:35:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.567 08:35:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:11.824 [2024-07-15 08:35:03.780152] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:11.824 [2024-07-15 08:35:03.780552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85785 ] 00:22:11.824 [2024-07-15 08:35:03.917136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.081 [2024-07-15 08:35:04.037904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.081 [2024-07-15 08:35:04.093497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:12.692 08:35:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.692 08:35:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:12.692 08:35:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:12.692 08:35:04 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.692 08:35:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:12.692 [2024-07-15 08:35:04.803709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.692 null0 00:22:12.692 [2024-07-15 08:35:04.835618] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:12.692 [2024-07-15 08:35:04.836039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:12.692 08:35:04 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.692 08:35:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:12.692 656159429 00:22:12.692 08:35:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:12.692 880171095 00:22:12.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:12.950 08:35:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85803 00:22:12.950 08:35:04 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:12.950 08:35:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85803 /var/tmp/bperf.sock 00:22:12.950 08:35:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85803 ']' 00:22:12.950 08:35:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:12.950 08:35:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.950 08:35:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:12.950 08:35:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.950 08:35:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:12.950 [2024-07-15 08:35:04.944850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
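At this point the linux-keyring variant has parked both PSKs in the kernel session keyring with keyctl (serials 656159429 and 880171095 above) instead of leaving them in files, and has launched a second bdevperf (pid 85803) with -z --wait-for-rpc. SPDK only consults the session keyring once keyring_linux_set_options --enable has been issued, which the script does below at keyring/linux.sh@73 before framework_start_init. A hedged sketch of that flow, reusing the key material and socket path shown in the trace:

    # Sketch: NVMe/TCP TLS PSK resolved from the kernel session keyring instead of a file.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Same interchange-format key the test installs; keyctl prints the key's serial number.
    keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

    # The linux keyring backend is opt-in: enable it while bdevperf waits for RPCs,
    # then finish framework initialization.
    "$rpc" -s "$sock" keyring_linux_set_options --enable
    "$rpc" -s "$sock" framework_start_init

    # Keys held in the session keyring are referenced by name, just like file-based keys.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0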
00:22:12.950 [2024-07-15 08:35:04.945296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85803 ] 00:22:12.950 [2024-07-15 08:35:05.095591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.208 [2024-07-15 08:35:05.226693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.144 08:35:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.144 08:35:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:14.144 08:35:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:14.144 08:35:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:14.144 08:35:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:14.144 08:35:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:14.402 [2024-07-15 08:35:06.534244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:14.660 08:35:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:14.660 08:35:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:14.660 [2024-07-15 08:35:06.795933] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.918 nvme0n1 00:22:14.918 08:35:06 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:14.918 08:35:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:14.918 08:35:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:14.918 08:35:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:14.918 08:35:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.918 08:35:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:15.177 08:35:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:15.177 08:35:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:15.177 08:35:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:15.177 08:35:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:15.177 08:35:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.177 08:35:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.177 08:35:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:15.436 08:35:07 keyring_linux -- keyring/linux.sh@25 -- # sn=656159429 00:22:15.436 08:35:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:15.436 08:35:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:15.436 
08:35:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 656159429 == \6\5\6\1\5\9\4\2\9 ]] 00:22:15.436 08:35:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 656159429 00:22:15.436 08:35:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:15.436 08:35:07 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:15.436 Running I/O for 1 seconds... 00:22:16.811 00:22:16.811 Latency(us) 00:22:16.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.811 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:16.811 nvme0n1 : 1.01 11466.99 44.79 0.00 0.00 11091.69 2934.23 12571.00 00:22:16.811 =================================================================================================================== 00:22:16.811 Total : 11466.99 44.79 0.00 0.00 11091.69 2934.23 12571.00 00:22:16.811 0 00:22:16.811 08:35:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:16.811 08:35:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:16.811 08:35:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:16.811 08:35:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:16.811 08:35:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:16.811 08:35:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:16.811 08:35:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:16.811 08:35:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:17.069 08:35:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:17.069 08:35:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:17.069 08:35:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:17.069 08:35:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.069 08:35:09 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:17.069 08:35:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:17.327 [2024-07-15 08:35:09.379965] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:17.327 [2024-07-15 08:35:09.380789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136b460 (107): Transport endpoint is not connected 00:22:17.327 [2024-07-15 08:35:09.381778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136b460 (9): Bad file descriptor 00:22:17.327 [2024-07-15 08:35:09.382775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:17.327 [2024-07-15 08:35:09.382798] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:17.327 [2024-07-15 08:35:09.382810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:17.327 request: 00:22:17.327 { 00:22:17.327 "name": "nvme0", 00:22:17.327 "trtype": "tcp", 00:22:17.327 "traddr": "127.0.0.1", 00:22:17.327 "adrfam": "ipv4", 00:22:17.327 "trsvcid": "4420", 00:22:17.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:17.327 "prchk_reftag": false, 00:22:17.327 "prchk_guard": false, 00:22:17.327 "hdgst": false, 00:22:17.327 "ddgst": false, 00:22:17.327 "psk": ":spdk-test:key1", 00:22:17.327 "method": "bdev_nvme_attach_controller", 00:22:17.327 "req_id": 1 00:22:17.327 } 00:22:17.327 Got JSON-RPC error response 00:22:17.327 response: 00:22:17.327 { 00:22:17.327 "code": -5, 00:22:17.327 "message": "Input/output error" 00:22:17.327 } 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@33 -- # sn=656159429 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 656159429 00:22:17.327 1 links removed 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@33 -- # sn=880171095 00:22:17.327 08:35:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 880171095 00:22:17.327 1 links removed 00:22:17.327 08:35:09 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 85803 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85803 ']' 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85803 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.327 08:35:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85803 00:22:17.327 killing process with pid 85803 00:22:17.327 Received shutdown signal, test time was about 1.000000 seconds 00:22:17.327 00:22:17.327 Latency(us) 00:22:17.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.327 =================================================================================================================== 00:22:17.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.328 08:35:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:17.328 08:35:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:17.328 08:35:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85803' 00:22:17.328 08:35:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 85803 00:22:17.328 08:35:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 85803 00:22:17.585 08:35:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85785 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85785 ']' 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85785 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85785 00:22:17.586 killing process with pid 85785 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85785' 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 85785 00:22:17.586 08:35:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 85785 00:22:18.152 ************************************ 00:22:18.152 END TEST keyring_linux 00:22:18.152 ************************************ 00:22:18.152 00:22:18.152 real 0m6.612s 00:22:18.152 user 0m12.924s 00:22:18.152 sys 0m1.614s 00:22:18.152 08:35:10 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.152 08:35:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:18.152 08:35:10 -- common/autotest_common.sh@1142 -- # return 0 00:22:18.152 08:35:10 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
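
The keyring_linux test that ends above drives everything over the bperf RPC socket: a TLS PSK is stored in the kernel session keyring under the name :spdk-test:key0, the Linux keyring backend is enabled in bdevperf before framework initialization, and the NVMe/TCP controller is attached with --psk referring to that key by name (the later attach attempt with :spdk-test:key1 is the negative case and is expected to fail). The following is a minimal sketch of that flow, using only commands that appear in the trace above; the PSK payload, socket path and NQNs are copied from the log, while the shell scaffolding (variable names, ordering of the cleanup) is assumed and differs from the real keyring/linux.sh.

  #!/usr/bin/env bash
  # Assumed wrappers around the RPC calls visible in the trace.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  bperf_py="/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock"

  # Store the TLS PSK in the session keyring; this is the payload keyctl print shows above.
  keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

  # Enable the Linux keyring backend, then let bdevperf finish its framework init.
  $rpc keyring_linux_set_options --enable
  $rpc framework_start_init

  # Attach the NVMe/TCP controller; --psk here is the name of the keyring entry created above.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # Run the random-read workload, then detach and unlink the key again.
  $bperf_py perform_tests
  $rpc bdev_nvme_detach_controller nvme0
  sn=$(keyctl search @s user ":spdk-test:key0")
  keyctl unlink "$sn"
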
00:22:18.152 08:35:10 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:18.152 08:35:10 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:18.152 08:35:10 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:18.152 08:35:10 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:18.152 08:35:10 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:18.152 08:35:10 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:18.152 08:35:10 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:18.152 08:35:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.152 08:35:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.152 08:35:10 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:18.152 08:35:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:18.152 08:35:10 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:18.152 08:35:10 -- common/autotest_common.sh@10 -- # set +x 00:22:19.527 INFO: APP EXITING 00:22:19.527 INFO: killing all VMs 00:22:19.527 INFO: killing vhost app 00:22:19.527 INFO: EXIT DONE 00:22:20.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:20.463 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:20.463 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:21.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:21.031 Cleaning 00:22:21.031 Removing: /var/run/dpdk/spdk0/config 00:22:21.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:21.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:21.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:21.031 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:21.031 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:21.032 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:21.032 Removing: /var/run/dpdk/spdk1/config 00:22:21.032 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:21.032 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:21.032 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:21.032 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:21.032 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:21.032 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:21.032 Removing: /var/run/dpdk/spdk2/config 00:22:21.032 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:21.032 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:21.032 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:21.032 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:21.032 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:21.032 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:21.032 Removing: /var/run/dpdk/spdk3/config 00:22:21.032 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:21.032 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:21.032 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:21.032 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:21.032 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:21.032 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:21.032 Removing: /var/run/dpdk/spdk4/config 00:22:21.032 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:21.032 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:21.032 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:21.032 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:21.032 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:21.032 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:21.032 Removing: /dev/shm/nvmf_trace.0 00:22:21.032 Removing: /dev/shm/spdk_tgt_trace.pid58792 00:22:21.032 Removing: /var/run/dpdk/spdk0 00:22:21.032 Removing: /var/run/dpdk/spdk1 00:22:21.032 Removing: /var/run/dpdk/spdk2 00:22:21.032 Removing: /var/run/dpdk/spdk3 00:22:21.032 Removing: /var/run/dpdk/spdk4 00:22:21.032 Removing: /var/run/dpdk/spdk_pid58641 00:22:21.032 Removing: /var/run/dpdk/spdk_pid58792 00:22:21.032 Removing: /var/run/dpdk/spdk_pid58979 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59071 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59093 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59208 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59226 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59344 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59535 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59675 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59752 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59828 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59913 00:22:21.032 Removing: /var/run/dpdk/spdk_pid59990 00:22:21.032 Removing: /var/run/dpdk/spdk_pid60029 00:22:21.032 Removing: /var/run/dpdk/spdk_pid60059 00:22:21.032 Removing: /var/run/dpdk/spdk_pid60126 00:22:21.032 Removing: /var/run/dpdk/spdk_pid60220 00:22:21.032 Removing: /var/run/dpdk/spdk_pid60658 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60710 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60761 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60777 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60844 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60860 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60927 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60943 00:22:21.290 Removing: /var/run/dpdk/spdk_pid60989 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61007 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61047 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61065 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61193 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61223 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61303 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61349 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61379 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61432 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61472 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61501 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61541 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61576 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61610 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61645 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61681 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61716 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61750 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61785 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61819 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61854 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61890 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61926 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61960 00:22:21.290 Removing: /var/run/dpdk/spdk_pid61995 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62038 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62070 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62110 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62140 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62210 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62303 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62611 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62623 00:22:21.290 
Removing: /var/run/dpdk/spdk_pid62654 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62673 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62694 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62713 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62732 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62748 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62772 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62786 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62801 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62828 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62841 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62857 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62881 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62895 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62910 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62935 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62948 00:22:21.290 Removing: /var/run/dpdk/spdk_pid62968 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63000 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63019 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63048 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63107 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63141 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63150 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63179 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63194 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63196 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63244 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63263 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63286 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63301 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63305 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63320 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63335 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63339 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63354 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63358 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63392 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63424 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63428 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63462 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63472 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63479 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63525 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63537 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63563 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63575 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63584 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63591 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63599 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63611 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63614 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63627 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63701 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63743 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63853 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63892 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63937 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63952 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63974 00:22:21.290 Removing: /var/run/dpdk/spdk_pid63988 00:22:21.290 Removing: /var/run/dpdk/spdk_pid64025 00:22:21.290 Removing: /var/run/dpdk/spdk_pid64041 00:22:21.290 Removing: /var/run/dpdk/spdk_pid64111 00:22:21.290 Removing: /var/run/dpdk/spdk_pid64132 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64183 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64244 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64307 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64336 00:22:21.549 Removing: 
/var/run/dpdk/spdk_pid64428 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64470 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64508 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64731 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64824 00:22:21.549 Removing: /var/run/dpdk/spdk_pid64853 00:22:21.549 Removing: /var/run/dpdk/spdk_pid65170 00:22:21.549 Removing: /var/run/dpdk/spdk_pid65208 00:22:21.549 Removing: /var/run/dpdk/spdk_pid65505 00:22:21.549 Removing: /var/run/dpdk/spdk_pid65908 00:22:21.549 Removing: /var/run/dpdk/spdk_pid66186 00:22:21.549 Removing: /var/run/dpdk/spdk_pid66968 00:22:21.549 Removing: /var/run/dpdk/spdk_pid67789 00:22:21.549 Removing: /var/run/dpdk/spdk_pid67911 00:22:21.549 Removing: /var/run/dpdk/spdk_pid67979 00:22:21.549 Removing: /var/run/dpdk/spdk_pid69236 00:22:21.549 Removing: /var/run/dpdk/spdk_pid69442 00:22:21.549 Removing: /var/run/dpdk/spdk_pid72794 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73109 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73217 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73353 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73375 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73403 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73436 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73532 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73667 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73817 00:22:21.549 Removing: /var/run/dpdk/spdk_pid73892 00:22:21.549 Removing: /var/run/dpdk/spdk_pid74088 00:22:21.549 Removing: /var/run/dpdk/spdk_pid74177 00:22:21.549 Removing: /var/run/dpdk/spdk_pid74270 00:22:21.549 Removing: /var/run/dpdk/spdk_pid74583 00:22:21.549 Removing: /var/run/dpdk/spdk_pid74961 00:22:21.549 Removing: /var/run/dpdk/spdk_pid74963 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75247 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75267 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75281 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75316 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75322 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75623 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75666 00:22:21.549 Removing: /var/run/dpdk/spdk_pid75947 00:22:21.549 Removing: /var/run/dpdk/spdk_pid76149 00:22:21.549 Removing: /var/run/dpdk/spdk_pid76528 00:22:21.549 Removing: /var/run/dpdk/spdk_pid77030 00:22:21.549 Removing: /var/run/dpdk/spdk_pid77838 00:22:21.549 Removing: /var/run/dpdk/spdk_pid78426 00:22:21.549 Removing: /var/run/dpdk/spdk_pid78428 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80342 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80402 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80468 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80523 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80648 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80711 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80771 00:22:21.549 Removing: /var/run/dpdk/spdk_pid80826 00:22:21.549 Removing: /var/run/dpdk/spdk_pid81154 00:22:21.549 Removing: /var/run/dpdk/spdk_pid82309 00:22:21.549 Removing: /var/run/dpdk/spdk_pid82449 00:22:21.549 Removing: /var/run/dpdk/spdk_pid82692 00:22:21.549 Removing: /var/run/dpdk/spdk_pid83247 00:22:21.549 Removing: /var/run/dpdk/spdk_pid83405 00:22:21.549 Removing: /var/run/dpdk/spdk_pid83563 00:22:21.549 Removing: /var/run/dpdk/spdk_pid83660 00:22:21.549 Removing: /var/run/dpdk/spdk_pid83824 00:22:21.549 Removing: /var/run/dpdk/spdk_pid83933 00:22:21.549 Removing: /var/run/dpdk/spdk_pid84586 00:22:21.549 Removing: /var/run/dpdk/spdk_pid84624 00:22:21.549 Removing: /var/run/dpdk/spdk_pid84660 00:22:21.549 Removing: /var/run/dpdk/spdk_pid84912 
00:22:21.549 Removing: /var/run/dpdk/spdk_pid84947 00:22:21.549 Removing: /var/run/dpdk/spdk_pid84977 00:22:21.549 Removing: /var/run/dpdk/spdk_pid85399 00:22:21.549 Removing: /var/run/dpdk/spdk_pid85416 00:22:21.549 Removing: /var/run/dpdk/spdk_pid85671 00:22:21.549 Removing: /var/run/dpdk/spdk_pid85785 00:22:21.549 Removing: /var/run/dpdk/spdk_pid85803 00:22:21.549 Clean 00:22:21.549 08:35:13 -- common/autotest_common.sh@1451 -- # return 0 00:22:21.549 08:35:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:21.549 08:35:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:21.549 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 08:35:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:21.807 08:35:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:21.807 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.807 08:35:13 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:21.807 08:35:13 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:21.807 08:35:13 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:21.807 08:35:13 -- spdk/autotest.sh@391 -- # hash lcov 00:22:21.807 08:35:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:21.807 08:35:13 -- spdk/autotest.sh@393 -- # hostname 00:22:21.807 08:35:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:22.066 geninfo: WARNING: invalid characters removed from testname! 
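
The coverage post-processing that runs here follows a capture/merge/filter pattern: geninfo captures the counters produced during the test run into cov_test.info (the "invalid characters removed from testname" warning above comes from that step, because the hostname is passed as the tracefile test name), the result is merged with the pre-test baseline cov_base.info, and DPDK, system headers and a few example/app directories are stripped before cov_total.info is kept. Below is a condensed sketch of the lcov calls as they appear above and in the entries that follow; the repeated option block is abbreviated into an LCOV_OPTS variable here purely for readability, and the paths are the ones from the log.

  # Option block the log passes to every lcov call (branch + function coverage, no external sources).
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
             --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
             --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"
  SRC=/home/vagrant/spdk_repo/spdk
  OUT=$SRC/../output

  # Capture counters gathered while the tests ran; cov_base.info was captured before the run.
  lcov $LCOV_OPTS -c -d $SRC -t "$(hostname)" -o $OUT/cov_test.info

  # Merge baseline and test data, then drop paths that should not count toward coverage.
  lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/dpdk/*'            -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '/usr/*'              -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/examples/vmd/*'    -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_lspci/*'  -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_top/*'    -o $OUT/cov_total.info
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
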
00:22:48.629 08:35:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:51.912 08:35:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:54.441 08:35:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:56.972 08:35:49 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:00.279 08:35:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:02.815 08:35:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:05.368 08:35:57 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:05.368 08:35:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.368 08:35:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:05.368 08:35:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.368 08:35:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.368 08:35:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.368 08:35:57 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.368 08:35:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.368 08:35:57 -- paths/export.sh@5 -- $ export PATH 00:23:05.368 08:35:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.369 08:35:57 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:05.369 08:35:57 -- common/autobuild_common.sh@444 -- $ date +%s 00:23:05.369 08:35:57 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721032557.XXXXXX 00:23:05.369 08:35:57 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721032557.amq8L6 00:23:05.369 08:35:57 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:23:05.369 08:35:57 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:23:05.369 08:35:57 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:05.369 08:35:57 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:05.369 08:35:57 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:05.369 08:35:57 -- common/autobuild_common.sh@460 -- $ get_config_params 00:23:05.369 08:35:57 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:23:05.369 08:35:57 -- common/autotest_common.sh@10 -- $ set +x 00:23:05.369 08:35:57 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:05.369 08:35:57 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:23:05.369 08:35:57 -- pm/common@17 -- $ local monitor 00:23:05.369 08:35:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:05.369 08:35:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:05.369 08:35:57 -- pm/common@25 -- $ sleep 1 00:23:05.369 08:35:57 -- pm/common@21 -- $ date +%s 00:23:05.369 08:35:57 -- pm/common@21 -- $ date +%s 00:23:05.369 08:35:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721032557 00:23:05.369 08:35:57 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721032557 00:23:05.369 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721032557_collect-vmstat.pm.log 00:23:05.369 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721032557_collect-cpu-load.pm.log 00:23:06.346 08:35:58 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:23:06.346 08:35:58 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:06.347 08:35:58 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:06.347 08:35:58 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:06.347 08:35:58 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:06.347 08:35:58 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:06.347 08:35:58 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:06.347 08:35:58 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:06.347 08:35:58 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:06.347 08:35:58 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:06.347 08:35:58 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:06.347 08:35:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:06.347 08:35:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:06.347 08:35:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:06.347 08:35:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:06.347 08:35:58 -- pm/common@44 -- $ pid=87528 00:23:06.347 08:35:58 -- pm/common@50 -- $ kill -TERM 87528 00:23:06.347 08:35:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:06.347 08:35:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:06.347 08:35:58 -- pm/common@44 -- $ pid=87529 00:23:06.347 08:35:58 -- pm/common@50 -- $ kill -TERM 87529 00:23:06.347 + [[ -n 5098 ]] 00:23:06.347 + sudo kill 5098 00:23:06.356 [Pipeline] } 00:23:06.376 [Pipeline] // timeout 00:23:06.382 [Pipeline] } 00:23:06.400 [Pipeline] // stage 00:23:06.405 [Pipeline] } 00:23:06.424 [Pipeline] // catchError 00:23:06.433 [Pipeline] stage 00:23:06.435 [Pipeline] { (Stop VM) 00:23:06.450 [Pipeline] sh 00:23:06.729 + vagrant halt 00:23:10.909 ==> default: Halting domain... 00:23:16.275 [Pipeline] sh 00:23:16.559 + vagrant destroy -f 00:23:20.770 ==> default: Removing domain... 
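
The autopackage step above brackets its work with two background resource monitors and tears them down before the VM is destroyed: collect-cpu-load and collect-vmstat are started with a shared monitor.autopackage.sh.<timestamp> name and log into the power/ output directory, and stop_monitor_resources later looks for their pid files and sends SIGTERM. A rough reconstruction from the trace follows; the backgrounding and the assumption that each monitor writes its own pid file are inferred, and only the commands, flags and paths shown in the log are used.

  PM=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
  POWER=/home/vagrant/spdk_repo/spdk/../output/power
  ts=$(date +%s)

  # Start both monitors in the background; -d sets the output dir, -l enables logging,
  # -p names this monitoring session.
  $PM/collect-cpu-load -d $POWER -l -p monitor.autopackage.sh.$ts &
  $PM/collect-vmstat   -d $POWER -l -p monitor.autopackage.sh.$ts &

  # ... packaging / timing work happens here ...

  # Stop the monitors again via the pid files they are expected to leave behind (assumed).
  for pidfile in $POWER/collect-cpu-load.pid $POWER/collect-vmstat.pid; do
      [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
  done
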
00:23:20.781 [Pipeline] sh 00:23:21.097 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:21.106 [Pipeline] } 00:23:21.125 [Pipeline] // stage 00:23:21.130 [Pipeline] } 00:23:21.200 [Pipeline] // dir 00:23:21.205 [Pipeline] } 00:23:21.219 [Pipeline] // wrap 00:23:21.224 [Pipeline] } 00:23:21.238 [Pipeline] // catchError 00:23:21.247 [Pipeline] stage 00:23:21.248 [Pipeline] { (Epilogue) 00:23:21.264 [Pipeline] sh 00:23:21.540 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:28.128 [Pipeline] catchError 00:23:28.129 [Pipeline] { 00:23:28.140 [Pipeline] sh 00:23:28.412 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:28.412 Artifacts sizes are good 00:23:28.421 [Pipeline] } 00:23:28.438 [Pipeline] // catchError 00:23:28.448 [Pipeline] archiveArtifacts 00:23:28.453 Archiving artifacts 00:23:28.658 [Pipeline] cleanWs 00:23:28.691 [WS-CLEANUP] Deleting project workspace... 00:23:28.691 [WS-CLEANUP] Deferred wipeout is used... 00:23:28.697 [WS-CLEANUP] done 00:23:28.698 [Pipeline] } 00:23:28.709 [Pipeline] // stage 00:23:28.714 [Pipeline] } 00:23:28.728 [Pipeline] // node 00:23:28.733 [Pipeline] End of Pipeline 00:23:28.753 Finished: SUCCESS